Updates from: 05/14/2022 01:07:39
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
Previously updated : 11/23/2021 Last updated : 05/13/2022
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*.
+This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign in process rather than redirecting to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*.
![Screenshot demonstrates an Azure AD B2C custom domain user experience.](./media/custom-domain/custom-domain-user-experience.png)
Watch this video to learn about Azure AD B2C custom domains.
The following diagram illustrates Azure Front Door integration:
-1. From an application, a user selects the sign-in button, which takes them to the Azure AD B2C sign-in page. This page specifies a custom domain name.
+1. From an application, a user selects the sign in button, which takes them to the Azure AD B2C sign in page. This page specifies a custom domain name.
1. The web browser resolves the custom domain name to the Azure Front Door IP address. During DNS resolution, a canonical name (CNAME) record with a custom domain name points to your Front Door default front-end host (for example, `contoso-frontend.azurefd.net`).
1. The traffic addressed to the custom domain (for example, `login.contoso.com`) is routed to the specified Front Door default front-end host (`contoso-frontend.azurefd.net`).
1. Azure Front Door invokes Azure AD B2C content using the Azure AD B2C `<tenant-name>.b2clogin.com` default domain. The request to the Azure AD B2C endpoint includes the original custom domain name.
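To confirm this resolution chain from a client, you can query the CNAME record directly. A minimal sketch with the `Resolve-DnsName` cmdlet; `login.contoso.com` and `contoso-frontend.azurefd.net` are this article's illustrative values:

```powershell
# The answer's NameHost should be the Front Door default front-end host,
# for example contoso-frontend.azurefd.net.
Resolve-DnsName -Name 'login.contoso.com' -Type CNAME
```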
When using custom domains, consider the following:
- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-classic-limits) for Azure Front Door.
- Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).
-- To use Azure Front Door [Web Application Firewall](../web-application-firewall/afds/afds-overview.md), you need to confirm your firewall configuration and rules work correctly with your Azure AD B2C user flows.
-- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *&lt;tenant-name&gt;.b2clogin.com* (unless you're using a custom policy and you [block access](#block-access-to-the-default-domain-name)).
+- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *&lt;tenant-name&gt;.b2clogin.com* (unless you're using a custom policy and you [block access](#optional-block-access-to-the-default-domain-name)).
- If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.

## Prerequisites
Follow these steps to add a custom domain to your Azure AD B2C tenant:
|login | TXT | MS=ms12345678 |
|account | TXT | MS=ms87654321 |
- The TXT record must be associated with the subdomain, or hostname of the domain. For example, the *login* part of the *contoso.com* domain. If the hostname is empty or `@`, Azure AD will not be able to verify the custom domain you added. In the following examples, both records are configured incorrectly.
+   The TXT record must be associated with the subdomain or hostname of the domain. For example, the *login* part of the *contoso.com* domain. If the hostname is empty or `@`, Azure AD won't be able to verify the custom domain you added. In the following examples, both records are configured incorrectly.
|Name (hostname) |Type |Data |
||||
Follow these steps to add a custom domain to your Azure AD B2C tenant:
> [!TIP]
> You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use [Azure DNS zone](../dns/dns-getstarted-portal.md), or [App Service domains](../app-service/manage-custom-dns-buy-domain.md).
-1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain, or hostname you plan to use. For example, to be able to sign-in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not the top-level domain *contoso.com*.
+1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain or hostname you plan to use. For example, to be able to sign in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not just the top-level domain *contoso.com*.
> [!IMPORTANT]
> After the domain is verified, **delete** the DNS TXT record you created.
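Before you select **Verify**, you can check that the TXT record has propagated. A minimal sketch; the hostname and the `MS=` value are illustrative:

```powershell
# The Strings property should contain the MS=msXXXXXXXX value from the portal.
Resolve-DnsName -Name 'login.contoso.com' -Type TXT | Select-Object -ExpandProperty Strings
```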
Follow these steps to add a custom domain to your Azure AD B2C tenant:
## Step 2. Create a new Azure Front Door instance
-Follow these steps to create a Front Door for your Azure AD B2C tenant. For more information, see [creating a Front Door for your application](../frontdoor/quickstart-create-front-door.md#create-a-front-door-for-your-application).
-
+Follow these steps to create an Azure Front Door:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. To choose the directory that contains the Azure subscription that you'd like to use for Azure Front Door and *not* the directory containing your Azure AD B2C tenant, select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
-1. From the home page or the Azure menu, select **Create a resource**. Select **Networking** > **See All** > **Front Door**.
-1. In the **Basics** tab of **Create a Front Door** page, enter or select the following information, and then select **Next: Configuration**.
-
- | Setting | Value |
- | | |
- | **Subscription** | Select your Azure subscription. |
- | **Resource group** | Select an existing resource group, or select **Create new** to create a new one.|
- | **Resource group location** | Select the location of the resource group. For example, **Central US**. |
-
-### 2.1 Add frontend host
-
-The frontend host is the domain name used by your application. When you create a Front Door, the default frontend host is a subdomain of `azurefd.net`.
-
-Azure Front Door provides the option of associating a custom domain with the frontend host. With this option, you associate the Azure AD B2C user interface with a custom domain in your URL instead of a Front Door owned domain name. For example, `https://login.contoso.com`.
-
-To add a frontend host, follow these steps:
-
-1. In **Frontends/domains**, select **+** to open **Add a frontend host**.
-1. For **Host name**, enter a globally unique hostname. The host name is not your custom domain. This example uses *contoso-frontend*. Select **Add**.
-
- ![Screenshot demonstrates how to add a frontend host.](./media/custom-domain/add-frontend-host-azure-front-door.png)
-
-### 2.2 Add backend and backend pool
-
-A backend refers to your [Azure AD B2C tenant name](tenant-management.md#get-your-tenant-name), `tenant-name.b2clogin.com`. To add a backend pool, follow these steps:
-1. Still in **Create a Front Door**, in **Backend pools**, select **+** to open **Add a backend pool**.
-
-1. Enter a **Name**. For example, *myBackendPool*. Select **Add a backend**.
+1. To choose the directory that contains the Azure subscription that you'd like to use for Azure Front Door and *not* the directory containing your Azure AD B2C tenant:
- The following screenshot demonstrates how to create a backend pool:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
- ![Screenshot demonstrates how to add a frontend backend pool.](./media/custom-domain/front-door-add-backend-pool.png)
-
-1. In the **Add a backend** blade, select the following information, and then select **Add**.
-
- | Setting | Value |
- | | |
- | **Backend host type**| Select **Custom host**.|
- | **Backend host name**| Select the name of your [Azure AD B2C](tenant-management.md#get-your-tenant-name), `<tenant-name>.b2clogin.com`. For example, contoso.b2clogin.com.|
- | **Backend host header**| Select the same value you selected for **Backend host name**.|
+   1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select the **Switch** button next to the directory.
- **Leave all other fields default.*
+1. Follow the steps in [Create Front Door profile - Quick Create](../frontdoor/create-front-door-portal.md#create-front-door-profilequick-create) to create a Front Door for your Azure AD B2C tenant using the following settings:
+
- The following screenshot demonstrates how to create a custom host backend that is associated with an Azure AD B2C tenant:
+ |Key |Value |
+ |||
+ |Subscription|Select your Azure subscription.|
+ |Resource group| Select an existing resource group, or create a new one.|
+ |Name| Give your profile a name such as `b2cazurefrontdoor`.|
+   |Tier| Select either Standard or Premium tier. Standard tier is optimized for content delivery. Premium tier builds on Standard tier and is focused on security. See [Tier Comparison](../frontdoor/standard-premium/tier-comparison.md).|
+ |Endpoint name| Enter a globally unique name for your endpoint, such as `b2cazurefrontdoor`. The **Endpoint hostname** is generated automatically. |
+ |Origin type| Select `Custom`.|
+ |Origin host name| Enter `<tenant-name>.b2clogin.com`. Replace `<tenant-name>` with the [name of your Azure AD B2C tenant](tenant-management.md#get-your-tenant-name).|
- ![Screenshot demonstrates how to add a custom host backend.](./media/custom-domain/add-a-backend.png)
-
-1. To complete the configuration of the backend pool, on the **Add a backend pool** blade, select **Add**.
+   Leave **Caching** and **WAF policy** empty.
-1. After you add the **backend** to the **backend pool**, disable the **Health probes**.
-
- ![Screenshot demonstrates how to add a backend pool and disable the health probes.](./media/custom-domain/add-a-backend-pool.png)
-
-### 2.3 Add a routing rule
-
-Finally, add a routing rule. The routing rule maps your frontend host to the backend pool. The rule forwards a request for the [frontend host](#21-add-frontend-host) to the Azure AD B2C [backend](#22-add-backend-and-backend-pool). To add a routing rule, follow these steps:
-
-1. In **Add a rule**, for **Name**, enter *LocationRule*. Accept all the default values, then select **Add** to add the routing rule.
-1. Select **Review + Create**, and then **Create**.
-
- ![Screenshot demonstrates how to create Azure Front Door.](./media/custom-domain/configuration-azure-front-door.png)
+
+1. Once the Azure Front Door resource is created, select **Overview**, and copy the **Endpoint hostname**. It looks something like `b2cazurefrontdoor-ab123e.z01.azurefd.net`.
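To fetch the endpoint hostname from a script instead of the portal, here's a minimal sketch; it assumes the Az.Cdn module's Front Door Standard/Premium cmdlets and uses hypothetical resource names:

```powershell
# Hypothetical resource group, profile, and endpoint names.
$endpoint = Get-AzFrontDoorCdnEndpoint -ResourceGroupName 'myResourceGroup' `
    -ProfileName 'b2cazurefrontdoor' -EndpointName 'b2cazurefrontdoor'
# Prints something like b2cazurefrontdoor-ab123e.z01.azurefd.net.
$endpoint.HostName
```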
## Step 3. Set up your custom domain on Azure Front Door
-In this step, you add the custom domain you registered in [Step 1](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) to your Front Door.
+In this step, you add the custom domain you registered in [Step 1](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) to your Azure Front Door.
-### 3.1 Create a CNAME DNS record
+### 3.1. Create a CNAME DNS record
-Before you can use a custom domain with your Front Door, you must first create a canonical name (CNAME) record with your domain provider to point to your Front Door's default frontend host (say contoso-frontend.azurefd.net).
+To add the custom domain, create a canonical name (CNAME) record with your domain provider. A CNAME record is a type of DNS record that maps a source domain name to a destination domain name (alias). For Azure Front Door, the source domain name is your custom domain name, and the destination domain name is your Front Door default hostname that you configured in [Step 2. Create a new Azure Front Door instance](#step-2-create-a-new-azure-front-door-instance). For example, `b2cazurefrontdoor-ab123e.z01.azurefd.net`.
-A CNAME record is a type of DNS record that maps a source domain name to a destination domain name (alias). For Azure Front Door, the source domain name is your custom domain name, and the destination domain name is your Front Door default hostname you configure in [step 2.1](#21-add-frontend-host).
-
-After Front Door verifies the CNAME record that you created, traffic addressed to the source custom domain (such as login.contoso.com) is routed to the specified destination Front Door default frontend host, such as `contoso-frontend.azurefd.net`. For more information, see [add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md).
+After Front Door verifies the CNAME record that you created, traffic addressed to the source custom domain (such as `login.contoso.com`) is routed to the specified destination Front Door default frontend host, such as `contoso-frontend.azurefd.net`. For more information, see [add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md).
To create a CNAME record for your custom domain:
- Type: Enter *CNAME*.
- - Destination: Enter your default Front Door frontend host you create in [step 2.1](#21-add-frontend-host). It must be in the following format:_&lt;hostname&gt;_.azurefd.net. For example, `contoso-frontend.azurefd.net`.
+   - Destination: Enter the default Front Door frontend host you created in [step 2](#step-2-create-a-new-azure-front-door-instance). It must be in the following format: _&lt;hostname&gt;_.azurefd.net. For example, `contoso-frontend.azurefd.net`.
1. Save your changes.
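If your domain is hosted in an Azure DNS zone, the same CNAME record can be created with the Az.Dns module. A minimal sketch; the zone, resource group, and hostnames are illustrative:

```powershell
# Creates login.contoso.com as a CNAME pointing at the Front Door default host.
New-AzDnsRecordSet -Name 'login' -RecordType CNAME -ZoneName 'contoso.com' `
    -ResourceGroupName 'myResourceGroup' -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname 'contoso-frontend.azurefd.net')
```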
-### 3.2 Associate the custom domain with your Front Door
+### 3.2. Associate the custom domain with your Front Door
-After you've registered your custom domain, you can then add it to your Front Door.
-
-1. On the **Front Door designer** page, under the **Frontends/domains**, select **+** to add a custom domain.
+1. In the Azure portal home, search for and select the `b2cazurefrontdoor` Azure Front Door resource to open it.
- ![Screenshot demonstrates how to add a custom domain.](./media/custom-domain/azure-front-door-add-custom-domain.png)
-
-1. For **Frontend host**, the frontend host to use as the destination domain of your CNAME record is pre-filled and is derived from your Front Door: *&lt;default hostname&gt;*.azurefd.net. It cannot be changed.
+1. In the left menu, under **Settings**, select **Domains**.
+
+1. Select **Add a domain**.
+
+1. For **DNS management**, select **All other DNS services**.
+
+1. For **Custom domain**, enter your custom domain, such as `login.contoso.com`.
-1. For **Custom hostname**, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. For example, login.contoso.com.
+1. Keep the other values as defaults, and then select **Add**. Your custom domain is added to the list.
- ![Screenshot demonstrates how to verify a custom domain.](./media/custom-domain/azure-front-door-add-custom-domain-verification.png)
+1. Under **Validation state** of the domain that you just added, select **Pending**. A pane with TXT record info opens.
- Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated.
+ 1. Sign in to the web site of the domain provider for your custom domain.
+ 1. Find the page for managing DNS records by consulting the provider's documentation or searching for areas of the web site labeled **Domain Name**, **DNS**, or **Name Server Management**.
+
+ 1. Create a new TXT DNS record and complete the fields as shown below:
+ 1. Name: `_dnsauth.contoso.com`, but you need to enter just `_dnsauth`.
+ 1. Type: `TXT`
+ 1. Value: Something like `75abc123t48y2qrtsz2bvk......`.
-1. After the custom domain name is verified, under the **Custom domain name HTTPS**, select **Enabled**.
+   After you add the TXT DNS record, the **Validation state** in the Front Door resource will eventually change from **Pending** to **Approved**. You may need to reload the page to see the change. If your zone is hosted in Azure DNS, a scripted way to create this record is sketched after these steps.
- ![Screenshot shows how to enable HTTPS using an Azure Front Door certificate.](./media/custom-domain/azure-front-door-add-custom-domain-https-settings.png)
+1. Go back to your Azure portal. Under **Endpoint association** of the domain that you just added, select **Unassociated**.
-1. For the **Certificate management type**, select [Front Door management](../frontdoor/front-door-custom-domain-https.md#option-1-default-use-a-certificate-managed-by-front-door), or [Use my own certificate](../frontdoor/front-door-custom-domain-https.md#option-2-use-your-own-certificate). If you choose the *Front Door managed* option, wait until the certificate is fully provisioned.
+1. For **Select endpoint**, select the hostname endpoint from the dropdown.
-1. Select **Add**.
+1. For **Select routes** list, select **default-route**, and then select **Associate**.
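As referenced in step 3.2, if the zone is hosted in Azure DNS you can create the `_dnsauth` validation TXT record with the Az.Dns module as well. A minimal sketch; the token is a placeholder for the value shown in the portal pane:

```powershell
# Creates the _dnsauth TXT record that Front Door uses for domain validation.
New-AzDnsRecordSet -Name '_dnsauth' -RecordType TXT -ZoneName 'contoso.com' `
    -ResourceGroupName 'myResourceGroup' -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Value '<validation-token-from-portal>')
```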
-### 3.3 Update the routing rule
+### 3.3. Enable the route
-1. In the **Routing rules**, select the routing rule you created in [step 2.3](#23-add-a-routing-rule).
+The **default-route** routes the traffic from the client to Azure Front Door. Then, Azure Front Door uses your configuration to send the traffic to Azure AD B2C. Follow these steps to enable the default-route.
- ![Screenshot demonstrates how to select a routing rule.](./media/custom-domain/select-routing-rule.png)
+1. Select **Front Door manager**.
+1. To enable the **default-route**, first expand an endpoint from the list of endpoints in the Front Door manager. Then, select the **default-route**.
-1. Under the **Frontends/domains**, select your custom domain name.
-
- ![Screenshot demonstrates how to update the Azure Front Door routing rule.](./media/custom-domain/update-routing-rule.png)
+ The following screenshot shows how to select the default-route.
+
+ ![Screenshot of selecting the default route.](./media/custom-domain/enable-the-route.png)
-1. Select **Update**.
-1. From the main window, select **Save**.
+1. Select the **Enable route** checkbox.
+1. Select **Update** to save the changes.
## Step 4. Configure CORS
Configure Azure Blob storage for Cross-Origin Resource Sharing with the followin
## Test your custom domain

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+   2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select the **Switch** button next to it.
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Under **Policies**, select **User flows (policies)**.
1. Select a user flow, and then select **Run user flow**.
1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
1. Copy the URL under **Run user flow endpoint**.
- ![Screenshot demonstrates how to copy the authorization request URI.](./media/custom-domain/user-flow-run-now.png)
+ ![Screenshot of how to copy the authorization request U R I.](./media/custom-domain/user-flow-run-now.png)
-1. To simulate a sign-in with your custom domain, open a web browser and use the URL you copied. Replace the Azure AD B2C domain (_&lt;tenant-name&gt;_.b2clogin.com) with your custom domain.
+1. To simulate a sign in with your custom domain, open a web browser and use the URL you copied. Replace the Azure AD B2C domain (_&lt;tenant-name&gt;_.b2clogin.com) with your custom domain.
For example, instead of:
Configure Azure Blob storage for Cross-Origin Resource Sharing with the followin
```http
https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login
```
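To script this substitution instead of editing the URL by hand, a minimal sketch; the domains reuse this article's illustrative values, and the `client_id` below is a placeholder:

```powershell
# Replace the default B2C domain in the copied URL with the custom domain.
$url = 'https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=00000000-0000-0000-0000-000000000000&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login'
$url -replace 'contoso\.b2clogin\.com', 'login.contoso.com'
```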
-1. Verify that the Azure AD B2C is loaded correctly. Then, sign-in with a local account.
+1. Verify that the Azure AD B2C is loaded correctly. Then, sign in with a local account.
1. Repeat the test with the rest of your policies.

## Configure your identity provider
-When a user chooses to sign in with a social identity provider, Azure AD B2C initiates an authorization request and takes the user to the selected identity provider to complete the sign-in process. The authorization request specifies the `redirect_uri` with the Azure AD B2C default domain name:
+When a user chooses to sign in with a social identity provider, Azure AD B2C initiates an authorization request and takes the user to the selected identity provider to complete the sign in process. The authorization request specifies the `redirect_uri` with the Azure AD B2C default domain name:
```http
https://<tenant-name>.b2clogin.com/<tenant-name>/oauth2/authresp
```
-If you configured your policy to allow sign-in with an external identity provider, update the OAuth redirect URIs with the custom domain. Most identity providers allow you to register multiple redirect URIs. We recommend adding redirect URIs instead of replacing them so you can test your custom policy without affecting applications that use the Azure AD B2C default domain name.
+If you configured your policy to allow sign in with an external identity provider, update the OAuth redirect URIs with the custom domain. Most identity providers allow you to register multiple redirect URIs. We recommend adding redirect URIs instead of replacing them so you can test your custom policy without affecting applications that use the Azure AD B2C default domain name.
In the following redirect URI:
https://<domain-name>/11111111-1111-1111-1111-111111111111/v2.0/
::: zone pivot="b2c-custom-policy"
-## Block access to the default domain name
+## (Optional) Block access to the default domain name
After you add the custom domain and configure your application, users will still be able to access the &lt;tenant-name&gt;.b2clogin.com domain. To prevent access, you can configure the policy to check the authorization request "host name" against an allowed list of domains. The host name is the domain name that appears in the URL. The host name is available through `{Context:HostName}` [claim resolvers](claim-resolver-overview.md). Then you can present a custom error message.
After you add the custom domain and configure your application, users will still
::: zone-end +
+## (Optional) Azure Front Door advanced configuration
+
+You can use Azure Front Door advanced configuration, such as [Azure Web Application Firewall (WAF)](partner-azure-web-application-firewall.md). Azure WAF provides centralized protection of your web applications from common exploits and vulnerabilities.
+
+When using custom domains, consider the following points:
+
+- The WAF policy must be the same tier as the Azure Front Door profile. For more information about how to create a WAF policy to use with Azure Front Door, see [Configure WAF policy](../frontdoor/how-to-configure-endpoints.md).
+- The WAF managed rules feature isn't officially supported because it can cause false positives and block legitimate requests, so use WAF custom rules only if they meet your needs.
## Troubleshooting

### Azure AD B2C returns a page not found error

-- **Symptom** - After you configure a custom domain, when you try to sign in with the custom domain, you get an HTTP 404 error message.
+- **Symptom** - You configure a custom domain, but when you try to sign in with the custom domain, you get an HTTP 404 error message.
- **Possible causes** - This issue could be related to the DNS configuration or the Azure Front Door backend configuration.
- **Resolution**:
- 1. Make sure the custom domain is [registered and successfully verified](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) in your Azure AD B2C tenant.
- 1. Make sure the [custom domain](../frontdoor/front-door-custom-domain.md) is configured properly. The `CNAME` record for your custom domain must point to your Azure Front Door default frontend host (for example, contoso-frontend.azurefd.net).
- 1. Make sure the [Azure Front Door backend pool configuration](#22-add-backend-and-backend-pool) points to the tenant where you set up the custom domain name, and where your user flow or custom policies are stored.
+ - Make sure the custom domain is [registered and successfully verified](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) in your Azure AD B2C tenant.
+ - Make sure the [custom domain](../frontdoor/front-door-custom-domain.md) is configured properly. The `CNAME` record for your custom domain must point to your Azure Front Door default frontend host (for example, contoso-frontend.azurefd.net).
+
+### Our services aren't available right now
+
+- **Symptom** - You configure a custom domain, but when you try to sign in with the custom domain, you get the following error message: *Our services aren't available right now. We're working to restore all services as soon as possible. Please check back soon.*
+- **Possible causes** - This issue could be related to the Azure Front Door route configuration.
+- **Resolution**: Check the status of the **default-route**. If it's disabled, [Enable the route](#33-enable-the-route). The following screenshot shows how the default-route should look:
+ ![Screenshot of the status of the default-route.](./media/custom-domain/azure-front-door-route-status.png)
-### Azure AD B2C returns the resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
+### Azure AD B2C returns the resource you're looking for has been removed, had its name changed, or is temporarily unavailable.
-- **Symptom** - After you configure a custom domain, when you try to sign in with the custom domain, you get *the resource you are looking for has been removed, had its name changed, or is temporarily unavailable* error message.
+- **Symptom** - You configure a custom domain, but when you try to sign in with the custom domain, you get *the resource you are looking for has been removed, had its name changed, or is temporarily unavailable* error message.
- **Possible causes** - This issue could be related to the Azure AD custom domain verification.
- **Resolution**: Make sure the custom domain is [registered and **successfully verified**](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) in your Azure AD B2C tenant.

### Identity provider returns an error

- **Symptom** - After you configure a custom domain, you're able to sign in with local accounts. But when you sign in with credentials from external [social or enterprise identity providers](add-identity-provider.md), the identity provider presents an error message.
-- **Possible causes** - When Azure AD B2C takes the user to sign in with a federated identity provider, it specifies the redirect URI. The redirect URI is the endpoint to where the identity provider returns the token. The redirect URI is the same domain your application uses with the authorization request. If the redirect URI is not yet registered in the identity provider, it may not trust the new redirect URI, which results in an error message.
+- **Possible causes** - When Azure AD B2C takes the user to sign in with a federated identity provider, it specifies the redirect URI. The redirect URI is the endpoint to where the identity provider returns the token. The redirect URI is the same domain your application uses with the authorization request. If the redirect URI isn't yet registered in the identity provider, it may not trust the new redirect URI, which results in an error message.
- **Resolution** - Follow the steps in [Configure your identity provider](#configure-your-identity-provider) to add the new redirect URI.

## Frequently asked questions
-### Can I use Azure Front Door advanced configuration, such as *Web application firewall Rules*?
-
-While Azure Front Door advanced configuration settings are not officially supported, you can use them at your own risk.
-
### When I use Run Now to try to run my policy, why can't I see the custom domain?

Copy the URL, change the domain name manually, and then paste it back to your browser.
Copy the URL, change the domain name manually, and then paste it back to your br
Azure Front Door passes the user's original IP address. It's the IP address that you'll see in the audit reporting or your custom policy.
-### Can I use a third-party web application firewall (WAF) with B2C?
-
-To use your own web application firewall in front of Azure Front Door, you need to configure and validate that everything works correctly with your Azure AD B2C user flows, or custom policies.
+### Can I use a third-party Web Application Firewall (WAF) with B2C?
+Yes, Azure AD B2C supports BYO-WAF (Bring Your Own Web Application Firewall). However, you must test your WAF configuration to make sure it doesn't block, or raise false alerts on, legitimate requests to Azure AD B2C user flows or custom policies. Learn how to configure [Akamai WAF](partner-akamai.md) and [Cloudflare WAF](partner-cloudflare.md) with Azure AD B2C.
+
### Can my Azure Front Door instance be hosted in a different subscription than my Azure AD B2C tenant?

Yes, Azure Front Door can be in a different subscription.
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md
To enable sign-in for users with a Google account in Azure Active Directory B2C
1. In the upper-left corner of the page, select the project list, and then select **New Project**.
1. Enter a **Project Name**, and then select **Create**.
1. Make sure you are using the new project by selecting the project drop-down in the top-left of the screen. Select your project by name, then select **Open**.
-1. In the left menu, select **OAuth consent screen**, select **External**, and then select **Create**.
+1. In the left menu, select **APIs and services** and then **OAuth consent screen**. Select **External** and then select **Create**.
1. Enter a **Name** for your application.
1. Select a **User support email**.
+ 1. In the **App domain** section, enter a link to your **Application home page**, a link to your **Application privacy policy**, and a link to your **Application terms of service**.
1. In the **Authorized domains** section, enter *b2clogin.com*.
1. In the **Developer contact information** section, enter comma-separated emails for Google to notify you about any changes to your project.
1. Select **Save**.
If the sign-in process is successful, your browser is redirected to `https://jwt
- Check out the Google federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#google), and how to pass a Google access token [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#google-with-access-token)
active-directory Workload Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identities-overview.md
Here are some ways you can use workload identities:
- Review service principals and applications that are assigned to privileged directory roles in Azure AD using [access reviews for service principals](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md). - Access Azure AD protected resources without needing to manage secrets (for supported scenarios) using [workload identity federation](workload-identity-federation.md). - Apply Conditional Access policies to service principals owned by your organization using [Conditional Access for workload identities](../conditional-access/workload-identity.md).
+- Secure workload identities with [Identity Protection](../identity-protection/concept-workload-identity-risk.md).
## Next steps
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
-
## April 2022
-### General Availability- Microsoft Defender for Cloud for Endpoint Signal in Identity Protection
-
+### General Availability - Microsoft Defender for Endpoint Signal in Identity Protection
**Type:** New feature **Service category:** Identity Protection **Product capability:** Identity Security & Protection
-Identity Protection now integrates a signal from Microsoft Defender for Cloud for Endpoint (MDE) that will protect against PRT theft detection. To learn more, see: [What is risk? Azure AD Identity Protection | Microsoft Docs](../identity-protection/concept-identity-protection-risks.md).
+Identity Protection now integrates a signal from Microsoft Defender for Endpoint (MDE) that helps protect against PRT theft. To learn more, see: [What is risk? Azure AD Identity Protection | Microsoft Docs](../identity-protection/concept-identity-protection-risks.md).
### General availability - Entitlement management 3 stages of approval
-
**Type:** Changed feature **Service category:** Other **Product capability:** Entitlement Management
This update extends the Azure AD entitlement management access package policy to
### General Availability - Improvements to Azure AD Smart Lockout
-
**Type:** Changed feature **Service category:** Identity Protection **Product capability:** User Management
With a recent improvement, Smart Lockout now synchronizes the lockout state acro
-
### Public Preview - Enabling customization capabilities for the Self-Service Password Reset (SSPR) hyperlinks, footer hyperlinks and browser icons in Company Branding.

**Type:** New feature
Updating the Company Branding functionality on the Azure AD/Microsoft 365 sign-i
### Public Preview - Integration of Microsoft 365 App Certification details into AAD UX and Consent Experiences
-
**Type:** New feature **Service category:** User Access Management **Product capability:** AuthZ/Access Delegation
Updating the Company Branding functionality on the Azure AD/Microsoft 365 sign-i
### Public preview - Use Azure AD access reviews to review access of B2B direct connect users in Teams shared channels
-
**Type:** New feature **Service category:** Access Reviews **Product capability:** Identity Governance
-
Use Azure AD access reviews to review access of B2B direct connect users in Teams shared channels. For more information, see: [Include B2B direct connect users and teams accessing Teams Shared Channels in access reviews (preview)](../governance/create-access-review.md#include-b2b-direct-connect-users-and-teams-accessing-teams-shared-channels-in-access-reviews-preview).
Use Azure AD access reviews to review access of B2B direct connect users in Team
**Product capability:** Identity Security & Protection **Clouds impacted:** Public (Microsoft 365, GCC)
-
We're announcing the public preview of the following MS Graph APIs and PowerShell cmdlets for configuring federated settings when federated with Azure AD:
-
|Action |MS Graph API |PowerShell cmdlet |
||||
-|Get federation settings for a federated domain | [Get internalDomainFederation](https://docs.microsoft.com/graph/api/internaldomainfederation-get?view=graph-rest-beta) | [Get-MgDomainFederationConfiguration](https://docs.microsoft.com/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta) |
-|Create federation settings for a federated domain | [Create internalDomainFederation](https://docs.microsoft.com/graph/api/domain-post-federationconfiguration?view=graph-rest-beta) | [New-MgDomainFederationConfiguration](https://docs.microsoft.com/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration?view=graph-powershell-beta) |
-|Remove federation settings for a federated domain | [Delete internalDomainFederation](https://docs.microsoft.com/graph/api/internaldomainfederation-delete?view=graph-rest-beta) | [Remove-MgDomainFederationConfiguration](https://docs.microsoft.com/powershell/module/microsoft.graph.identity.directorymanagement/remove-mgdomainfederationconfiguration?view=graph-powershell-beta) |
-|Update federation settings for a federated domain | [Update internalDomainFederation](https://docs.microsoft.com/graph/api/internaldomainfederation-update?view=graph-rest-beta) | [Update-MgDomainFederationConfiguration](https://docs.microsoft.com/powershell/module/microsoft.graph.identity.directorymanagement/update-mgdomainfederationconfiguration?view=graph-powershell-beta) |
--
+|Get federation settings for a federated domain | [Get internalDomainFederation](/graph/api/internaldomainfederation-get?view=graph-rest-beta&preserve-view=true) | [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
+|Create federation settings for a federated domain | [Create internalDomainFederation](/graph/api/domain-post-federationconfiguration?view=graph-rest-beta&preserve-view=true) | [New-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
+|Remove federation settings for a federated domain | [Delete internalDomainFederation](/graph/api/internaldomainfederation-delete?view=graph-rest-beta&preserve-view=true) | [Remove-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/remove-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
+|Update federation settings for a federated domain | [Update internalDomainFederation](/graph/api/internaldomainfederation-update?view=graph-rest-beta&preserve-view=true) | [Update-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/update-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
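As a quick illustration, a hedged sketch of reading federation settings for a hypothetical domain with the new cmdlets; the scope shown is an assumption, and the cmdlet lives in the Graph beta surface, so check the linked reference:

```powershell
# Read federation settings for a federated domain (hypothetical domain name).
# Assumes the Microsoft Graph PowerShell SDK with access to the beta cmdlets.
Connect-MgGraph -Scopes 'Domain.Read.All'
Get-MgDomainFederationConfiguration -DomainId 'contoso.com'
```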
-If using older MSOnline cmdlets ([Get-MsolDomainFederationSettings](https://docs.microsoft.com/powershell/module/msonline/get-msoldomainfederationsettings?view=azureadps-1.0) and [Set-MsolDomainFederationSettings](https://docs.microsoft.com/powershell/module/msonline/set-msoldomainfederationsettings?view=azureadps-1.0)), we highly recommend transitioning to the latest MS Graph APIs and PowerShell cmdlets.
+If using older MSOnline cmdlets ([Get-MsolDomainFederationSettings](/powershell/module/msonline/get-msoldomainfederationsettings?view=azureadps-1.0&preserve-view=true) and [Set-MsolDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings?view=azureadps-1.0&preserve-view=true)), we highly recommend transitioning to the latest MS Graph APIs and PowerShell cmdlets.
-For more information, see [internalDomainFederation resource type - Microsoft Graph beta | Microsoft Docs](https://docs.microsoft.com/graph/api/resources/internaldomainfederation?view=graph-rest-beta).
-
+For more information, see [internalDomainFederation resource type - Microsoft Graph beta | Microsoft Docs](/graph/api/resources/internaldomainfederation?view=graph-rest-beta&preserve-view=true).
Added functionality to session controls allowing admins to reauthenticate a user
**Product capability:** Identity Security & Protection **Clouds impacted:** Public (Microsoft 365, GCC)
-We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via new security setting, [federatedIdpMfaBehavior](https://docs.microsoft.com/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values).
+We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that multifactor authentication has already been performed by the identity provider. The protection can be enabled via the new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta&preserve-view=true#federatedidpmfabehavior-values).
-We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as your multi factor authentication for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
+We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as the multifactor authentication method for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
In April 2022 we added the following 24 new applications in our App gallery with
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial.
-
For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest
For listing your application in the Azure AD app gallery, please read the detail
From April 15, 2022, Microsoft began storing Azure AD's Customer Data for new tenants with a Japan billing address within the Japanese data centers. For more information, see: [Customer data storage for Japan customers in Azure Active Directory](active-directory-data-storage-japan.md).
-
-
### Public Preview - New provisioning connectors in the Azure AD Application Gallery - April 2022

**Type:** New feature
You can now automate creating, updating, and deleting user accounts for these ne
For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md)
-
## March 2022
For more information about how to better secure your organization by using autom
-
### Public preview - Azure AD Recommendations

**Type:** New feature
You can also find the documentation of all the applications from here https://ak
For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest -
For listing your application in the Azure AD app gallery, please read the detail
-
## February 2022
-
### General Availability - France digital accessibility requirement

**Type:** Plan for change
This change provides users who are signing into Azure Active Directory on iOS, A
-
### General Availability - Downloadable access review history report

**Type:** New feature
With Azure Active Directory (Azure AD) Access Reviews, you can create a download
-
-
### Public Preview of Identity Protection for Workload Identities

**Type:** New feature
Azure AD Identity Protection is extending its core capabilities of detecting, in
-
### Public Preview - Cross-tenant access settings for B2B collaboration

**Type:** New feature
Cross-tenant access settings enable you to control how users in your organizatio
-
### Public preview - Create Azure AD access reviews with multiple stages of reviewers

**Type:** New feature
You can also find the documentation of all the applications from here: [https://
For listing your application in the Azure AD app gallery, please read the details here: [https://aka.ms/AzureADAppRequest](../manage-apps/v2-howto-app-gallery-listing.md) -
We have improved the Privileged Identity management (PIM) time to role activatio
-
## January 2022

### Public preview - Custom security attributes
We're no longer publishing sign-in logs with the following error codes because
|Error code | Failure reason|
| | |
-|50058| Session information isn't sufficient for single-sign-on.|
+|50058| Session information isn't sufficient for single-sign-on.|
|16000| Either multiple user identities are available for the current request or selected account isn't supported for the scenario.|
|500581| Rendering JavaScript. Fetching sessions for single-sign-on on V2 with prompt=none requires JavaScript to verify if any MSA accounts are signed in.|
|81012| The user trying to sign in to Azure AD is different from the user signed into the device.|
Updated "switch organizations" user interface in My Account. This visually impro
++
active-directory Migrate Okta Sync Provisioning To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning-to-azure-active-directory.md
The example will grab *all* on-premises Azure AD users and export a list of thei
1. Run these commands in PowerShell on a domain controller on-premises:

    ```PowerShell
- Get-ADUser -Filter * -Properties objectGUID | Select-Object
+    Get-ADUser -Filter * -Properties objectGUID | Select-Object `
UserPrincipalName, Name, objectGUID, @{Name = 'ImmutableID'; Expression = {
- [system.convert\]::ToBase64String(([GUID\]\$_.objectGUID).ToByteArray())
- } } | export-csv C:\\Temp\\OnPremIDs.csv
+    [system.convert]::ToBase64String(([GUID]$_.objectGUID).ToByteArray())
+ } } | export-csv C:\Temp\OnPremIDs.csv
    ```

    ![Screenshot that shows domain controller on-premises commands.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/domain-controller.png)
active-directory Ways Users Get Assigned To Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
# Understand how users are assigned to apps
-This article help you to understand how users get assigned to an application in your tenant.
+This article helps you to understand how users get assigned to an application in your tenant.
## How do users get assigned an application in Azure AD?
There are several ways a user can be assigned an application. Assignment can be
* An administrator enables [Self-service Application Access](./manage-self-service-access.md) to allow a user to add an application using the [My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) **Add App** feature, but only **with prior approval from a selected set of business approvers**
* An administrator enables [Self-service Group Management](../enterprise-users/groups-self-service-management.md) to allow a user to join a group that an application is assigned to **without business approval**
* An administrator enables [Self-service Group Management](../enterprise-users/groups-self-service-management.md) to allow a user to join a group that an application is assigned to, but only **with prior approval from a selected set of business approvers**
+* One of the application's roles is included in an [entitlement management access package](../governance/entitlement-management-access-package-resources.md), and a user requests or is assigned to that access package
* An administrator assigns a license to a user directly, for a Microsoft service such as [Microsoft 365](https://products.office.com/)
* An administrator assigns a license to a group that the user is a member of, for a Microsoft service such as [Microsoft 365](https://products.office.com/)
* A user [consents to an application](consent-and-permissions-overview.md#user-consent) on behalf of themselves.
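Most of these paths ultimately result in an app role assignment on the user. As an illustration (not from this article), a hedged sketch of a direct assignment with the Microsoft Graph PowerShell SDK; all IDs are placeholders:

```powershell
# Directly assign a user to an application's service principal (placeholder IDs).
Connect-MgGraph -Scopes 'AppRoleAssignment.ReadWrite.All'
New-MgUserAppRoleAssignment -UserId '<user-object-id>' `
    -PrincipalId '<user-object-id>' `
    -ResourceId '<service-principal-object-id>' `
    -AppRoleId '00000000-0000-0000-0000-000000000000' # default access role
```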
active-directory How To Assign App Role Managed Identity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md
na Previously updated : 12/10/2020 Last updated : 05/12/2022
Managed identities for Azure resources provide Azure services with an identity i
In this article, you learn how to assign a managed identity to an application role exposed by another application using PowerShell.
-
## Prerequisites
-
- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). **Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types)**.
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing.
- To run the example scripts, you have two options:
    - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top-right corner of code blocks.
- - Run scripts locally by installing the latest version of [Azure AD PowerShell](/powershell/azure/active-directory/install-adv2).
+ - Run scripts locally by installing the latest version of [the Az PowerShell module](/powershell/azure/install-az-ps) and the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started).
## Assign a managed identity access to another application's app role
In this article, you learn how to assign a managed identity to an application ro
1. Find the object ID of the service application's service principal. You can find this using the Azure portal. Go to Azure Active Directory and open the **Enterprise applications** page, then find the application and look for the **Object ID**. You can also find the service principal's object ID by its display name using the following PowerShell script:

    ```powershell
- $serverServicePrincipalObjectId = (Get-AzureADServicePrincipal -Filter "DisplayName eq '$applicationName'").ObjectId
+ $serverServicePrincipalObjectId = (Get-MgServicePrincipal -Filter "DisplayName eq '$applicationName'").Id
    ```

    > [!NOTE]
In this article, you learn how to assign a managed identity to an application ro
* `serverServicePrincipalObjectId`: the object ID of the server application's service principal, which you found in step 4.
* `appRoleId`: the ID of the app role exposed by the server app, which you generated in step 5 - in the example, the app role ID is `0566419e-bb95-4d9d-a4f8-ed9a0f147fa6`.
- Execute the following PowerShell script to add the role assignment:
+ Execute the following PowerShell command to add the role assignment:
```powershell
- New-AzureADServiceAppRoleAssignment -ObjectId $managedIdentityObjectId -Id $appRoleId -PrincipalId $managedIdentityObjectId -ResourceId $serverServicePrincipalObjectId
+ New-MgServicePrincipalAppRoleAssignment `
+ -ServicePrincipalId $managedIdentityObjectId `
+ -PrincipalId $managedIdentityObjectId `
+ -ResourceId $serverServicePrincipalObjectId `
+ -AppRoleId $appRoleId
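
    # Optional sanity check (hedged addition, not part of the original article):
    # list the app roles now assigned to the managed identity.
    Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $managedIdentityObjectId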
    ```

## Complete script
In this article, you learn how to assign a managed identity to an application ro
This example script shows how to assign an Azure web app's managed identity to an app role.

```powershell
-# Install the module. (You need admin on the machine.)
-# Install-Module AzureAD
+# Install the module.
+# Install-Module Microsoft.Graph -Scope CurrentUser
# Your tenant ID (in the Azure portal, under Azure Active Directory > Overview).
$tenantID = '<tenant-id>'
$appRoleName = '<app-role-name>' # For example, MyApi.Read.All
# Look up the web app's managed identity's object ID.
$managedIdentityObjectId = (Get-AzWebApp -ResourceGroupName $resourceGroupName -Name $webAppName).identity.principalid
-Connect-AzureAD -TenantId $tenantID
+Connect-MgGraph -TenantId $tenantId -Scopes 'Application.Read.All','Application.ReadWrite.All','AppRoleAssignment.ReadWrite.All','Directory.AccessAsUser.All','Directory.Read.All','Directory.ReadWrite.All'
# Look up the details about the server app's service principal and app role.
-$serverServicePrincipal = (Get-AzureADServicePrincipal -Filter "DisplayName eq '$serverApplicationName'")
+$serverServicePrincipal = (Get-MgServicePrincipal -Filter "DisplayName eq '$serverApplicationName'")
$serverServicePrincipalObjectId = $serverServicePrincipal.Id
$appRoleId = ($serverServicePrincipal.AppRoles | Where-Object {$_.Value -eq $appRoleName }).Id

# Assign the managed identity access to the app role.
-New-AzureADServiceAppRoleAssignment `
- -ObjectId $managedIdentityObjectId `
- -Id $appRoleId `
+New-MgServicePrincipalAppRoleAssignment `
+ -ServicePrincipalId $managedIdentityObjectId `
-PrincipalId $managedIdentityObjectId `
- -ResourceId $serverServicePrincipalObjectId
+ -ResourceId $serverServicePrincipalObjectId `
+ -AppRoleId $appRoleId
```

## Next steps
active-directory Admin Units Members Dynamic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-dynamic.md
Previously updated : 03/22/2022 Last updated : 05/13/2022
You can add or remove users or devices for administrative units manually. With this preview, you can add or remove users or devices for administrative units dynamically using rules. This article describes how to create administrative units with dynamic membership rules using the Azure portal, PowerShell, or Microsoft Graph API.
+> [!NOTE]
+> Dynamic membership rules for administrative units can be created using the same attributes available for dynamic groups. For more information about the specific attributes available and examples on how to use them, see [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md).
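For example, a hypothetical rule that dynamically includes all enabled users in a single department uses the same expression syntax as dynamic groups:

```
(user.department -eq "Marketing") -and (user.accountEnabled -eq true)
```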
+
Although administrative units with members assigned manually support multiple object types, such as users, groups, and devices, it is currently not possible to create an administrative unit with dynamic membership rules that includes more than one object type. For example, you can create administrative units with dynamic membership rules for users or devices, but not both. Administrative units with dynamic membership rules for groups are currently not supported.

## Prerequisites
app-service Resources Kudu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/resources-kudu.md
It also provides other features, such as:
- Run commands in the [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console).
- Download IIS diagnostic dumps or Docker logs.
- Manage IIS processes and site extensions.
-- Add deployment webhooks for Windows aps.
+- Add deployment webhooks for Windows apps.
- Allow ZIP deployment UI with `/ZipDeploy` (see the sketch below).
- Generate [custom deployment scripts](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
- Allow access with [REST API](https://github.com/projectkudu/kudu/wiki/REST-API).
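For example, a ZIP package can be pushed through ZipDeploy without using the Kudu UI. A hedged sketch with the Az PowerShell module; the resource group, app name, and package path are placeholders:

```powershell
# Publish-AzWebApp (Az.Websites) uploads the package to the app's
# ZipDeploy endpoint and waits for the deployment to complete.
Publish-AzWebApp -ResourceGroupName '<resource-group>' `
                 -Name '<app-name>' `
                 -ArchivePath 'C:\deploy\package.zip'
```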
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
Title: Configure Azure Application Gateway Private Link
+ Title: Configure Azure Application Gateway Private Link (preview)
description: This article shows you how to configure Application Gateway Private Link. -+ Last updated 05/09/2022
-# Configure Azure Application Gateway Private Link
+# Configure Azure Application Gateway Private Link (preview)
Application Gateway Private Link allows you to connect your workloads over a private connection spanning across VNets and subscriptions. For more information, see [Application Gateway Private Link](private-link.md).

:::image type="content" source="media/private-link/private-link.png" alt-text="Diagram showing Application Gateway Private Link":::
+> [!IMPORTANT]
+> Azure Application Gateway Private Link is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Configuration options
application-gateway Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link.md
Title: Azure Application Gateway Private Link
+ Title: Azure Application Gateway Private Link (preview)
description: This article is an overview of Application Gateway Private Link. -+ Last updated 05/09/2022
-# Application Gateway Private Link
+# Application Gateway Private Link (preview)
Today, you can deploy your critical workloads securely behind Application Gateway, gaining the flexibility of Layer 7 load balancing features. Access to the backend workloads is possible in two ways:
Private Link for Application Gateway allows you to connect workloads over a priv
:::image type="content" source="media/private-link/private-link.png" alt-text="Diagram showing Application Gateway Private Link":::
+> [!IMPORTANT]
+> Azure Application Gateway Private Link is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Features and capabilities
automanage Automanage Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-arc.md
Previously updated : 03/22/2022 Last updated : 05/12/2022 # Azure Automanage for Machines Best Practices - Azure Arc-enabled servers
For all of these services, we will auto-onboard, auto-configure, monitor for dri
Automanage supports the following operating systems for Azure Arc-enabled servers:
-- Windows Server 2012/R2
-- Windows Server 2016
-- Windows Server 2019
+- Windows Server 2012 R2, 2016, 2019, 2022
- CentOS 7.3+, 8
- RHEL 7.4+, 8
-- Ubuntu 16.04 and 18.04
+- Ubuntu 16.04, 18.04, 20.04
- SLES 12 (SP3-SP5 only)

## Participating services
Automanage supports the following operating systems for Azure Arc-enabled server
|---|---|---|
|[Machines Insights Monitoring](../azure-monitor/vm/vminsights-overview.md) |Azure Monitor for machines monitors the performance and health of your virtual machines, including their running processes and dependencies on other resources. |Production |
|[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. |Production, Dev/Test |
+|[Microsoft Antimalware](../security/fundamentals/antimalware.md) |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. **Note:** Microsoft Antimalware requires that there be no other antimalware software installed, or it may fail to work. This is also only supported for Windows Server 2016 and above. |Production, Dev/Test |
|[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons, software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. |Production, Dev/Test |
|[Azure Guest Configuration](../governance/policy/concepts/guest-configuration.md) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure security baseline using the Guest Configuration extension. For Arc machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. |Production, Dev/Test |
|[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. |Production, Dev/Test |
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-virtual-machines.md
Previously updated : 10/19/2021 Last updated : 5/12/2022
The only time you might need to interact with this machine to manage these servi
## Enabling Automanage for VMs using Azure Policy

You can also enable Automanage on VMs at scale using the built-in Azure Policy. The policy has a DeployIfNotExists effect, which means that all eligible VMs located within the scope of the policy will be automatically onboarded to Automanage VM Best Practices.
-A direct link to the policy is [here](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F270610db-8c04-438a-a739-e8e6745b22d3).
+A direct link to the policy is [here](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff889cab7-da27-4c41-a3b0-de1f6f87c55).
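As a hedged sketch, the built-in policy can also be assigned programmatically with the Az.Resources module. The assignment name, scope, and definition ID below are placeholders, and DeployIfNotExists policies need a managed identity (with a location) to remediate:

```powershell
# Look up the built-in Automanage policy by its definition ID (placeholder).
$definition = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/policyDefinitions/<definition-id>'

# Assign it at subscription scope with a system-assigned identity.
New-AzPolicyAssignment -Name 'automanage-best-practices' `
    -Scope '/subscriptions/<subscription-id>' `
    -PolicyDefinition $definition `
    -Location 'eastus' `
    -AssignIdentity   # add -PolicyParameterObject if the policy defines parameters
```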
For more information, check out how to enable the [Automanage built-in policy](virtual-machines-policy-enable.md).
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-windows-server.md
Last updated 03/22/2022
-# Azure Automanage for Machines Best Practices - Windows Server
+# Azure Automanage for Machines Best Practices - Windows
These Azure services are automatically onboarded for you when you use Automanage Machine Best Practices on a Windows Server VM. They are essential to our best practices white paper, which you can find in our [Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/azure-server-management).
For all of these services, we will auto-onboard, auto-configure, monitor for dri
## Supported Windows Server versions
-Automanage supports the following Windows Server versions:
+Automanage supports the following Windows versions:
-- Windows Server 2012/R2
+- Windows Server 2012 R2
- Windows Server 2016
- Windows Server 2019
- Windows Server 2022
- Windows Server 2022 Azure Edition
+- Windows 10
## Participating services
azure-functions Functions Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-concurrency.md
This article describes the concurrency behaviors of event-driven triggers in Azure Functions. It also describes a new dynamic model for optimizing concurrency behaviors.
->[!NOTE]
->The dynamic concurrency model is currently in preview. Support for dynamic concurrency is limited to specific binding extensions.
- The hosting model for Functions allows multiple function invocations to run concurrently on a single compute instance. For example, consider a case where you have three different functions in your function app, which is scaled out and running on multiple instances. In this scenario, each function processes invocations on each VM instance on which your function app is running. The function invocations on a single instance share the same VM compute resources, such as memory, CPU, and connections. When your app is hosted in a dynamic plan (Consumption or Premium), the platform scales the number of function app instances up or down based on the number of incoming events. To learn more, see [Event Driven Scaling](./event-driven-scaling.md). When you host your functions in a Dedicated (App Service) plan, you manually configure your instances or [set up an autoscale scheme](dedicated-plan.md#scaling). Because multiple function invocations can run on each instance concurrently, each function needs to have a way to throttle how many concurrent invocations it's processing at any given time.
While such concurrency configurations give you control of certain trigger behavi
Ideally, we want the system to allow instances to process as much work as they can while keeping each instance healthy and latencies low, which is what dynamic concurrency is designed to do.
-## Dynamic concurrency (preview)
+## Dynamic concurrency
Functions now provides a dynamic concurrency model that simplifies configuring concurrency for all function apps running in the same plan.
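Dynamic concurrency is turned on in the host's *host.json* file. A minimal sketch, assuming the `concurrency` section and property names documented for the Functions host:

```json
{
  "version": "2.0",
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
```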
When dynamic concurrency is enabled, you'll see dynamic concurrency decisions in
### Extension support
-Dynamic concurrency is enabled for a function app at the host level, and any extensions that support dynamic concurrency run in that mode. Dynamic concurrency requires collaboration between the host and individual trigger extensions. For preview, only the listed versions of the following extensions support dynamic concurrency.
+Dynamic concurrency is enabled for a function app at the host level, and any extensions that support dynamic concurrency run in that mode. Dynamic concurrency requires collaboration between the host and individual trigger extensions. Only the listed versions of the following extensions support dynamic concurrency.
#### Azure Queues
azure-maps Authentication Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md
Title: Authentication and authorization best practices in Azure Maps
+ Title: Authentication best practices in Azure Maps
-description: Learn tips & tricks to optimize the use of Authentication and Authorization in your Azure Maps applications.
+description: Learn tips & tricks to optimize the use of Authentication in your Azure Maps applications.
Last updated 05/11/2022
-# Authentication and authorization best practices
+# Authentication best practices
The single most important part of your application is its security. No matter how good the user experience might be, if your application isn't secure, a hacker can ruin it.
azure-monitor Resource Manager Data Collection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-data-collection-rules.md
The following sample creates an association between an Azure virtual machine and
"contentVersion": "1.0.0.0", "parameters": { "vmName": {
- "value": "my-windows-vm"
+ "value": "my-azure-vm"
}, "associationName": { "value": "my-windows-vm-my-dcr"
The following sample creates an association between an Azure Arc-enabled server
"contentVersion": "1.0.0.0", "parameters": { "vmName": {
- "value": "my-windows-vm"
+ "value": "my-hybrid-vm"
}, "associationName": { "value": "my-windows-vm-my-dcr"
azure-monitor Alerts Common Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md
You can opt in or opt out to the common alert schema through Action Groups, on b
> - Smart detection alerts
> 1. The following alert types currently do not support the common schema:
> - Alerts generated by [VM insights](../vm/vminsights-overview.md)
-> - Alerts generated by [Azure Cost Management](../../cost-management-billing/manage/cost-management-budget-scenario.md)
### Through the Azure portal
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
While there are [W3C Trace Context](https://www.w3.org/TR/trace-context/) and [H
> * **Cross-component tracing is not supported for queues yet** With HTTP, if your producer and consumer send telemetry to different Application Insights resources, Transaction Diagnostics Experience and Application Map show transactions and map end-to-end. In case of queues, this is not supported yet.

### Service Bus Queue
-Application Insights tracks Service Bus Messaging calls with the new [Microsoft Azure ServiceBus Client for .NET](https://www.nuget.org/packages/Microsoft.Azure.ServiceBus/) version 3.0.0 and higher.
-If you use [message handler pattern](/dotnet/api/microsoft.azure.servicebus.queueclient.registermessagehandler) to process messages, you are done: all Service Bus calls done by your service are automatically tracked and correlated with other telemetry items.
-Refer to the [Service Bus client tracing with Microsoft Application Insights](../../service-bus-messaging/service-bus-end-to-end-tracing.md) if you manually process messages.
+Refer to [Distributed tracing and correlation through Service Bus messaging](../../service-bus-messaging/service-bus-end-to-end-tracing.md#distributed-tracing-and-correlation-through-service-bus-messaging) for tracing information.
-If you use [WindowsAzure.ServiceBus](https://www.nuget.org/packages/WindowsAzure.ServiceBus/) package, read further - following examples demonstrate how to track (and correlate) calls to the Service Bus as Service Bus queue uses AMQP protocol and Application Insights doesn't automatically track queue operations.
-Correlation identifiers are passed in the message properties.
-
-#### Enqueue
-
-```csharp
-public async Task Enqueue(string payload)
-{
- // StartOperation is a helper method that initializes the telemetry item
- // and allows correlation of this operation with its parent and children.
- var operation = telemetryClient.StartOperation<DependencyTelemetry>("enqueue " + queueName);
-
- operation.Telemetry.Type = "Azure Service Bus";
- operation.Telemetry.Data = "Enqueue " + queueName;
-
- var message = new BrokeredMessage(payload);
- // Service Bus queue allows the property bag to pass along with the message.
- // We will use them to pass our correlation identifiers (and other context)
- // to the consumer.
- message.Properties.Add("ParentId", operation.Telemetry.Id);
- message.Properties.Add("RootId", operation.Telemetry.Context.Operation.Id);
-
- try
- {
- await queue.SendAsync(message);
-
- // Set operation.Telemetry Success and ResponseCode here.
- operation.Telemetry.Success = true;
- }
- catch (Exception e)
- {
- telemetryClient.TrackException(e);
- // Set operation.Telemetry Success and ResponseCode here.
- operation.Telemetry.Success = false;
- throw;
- }
- finally
- {
- telemetryClient.StopOperation(operation);
- }
-}
-```
-
-#### Process
-```csharp
-public async Task Process(BrokeredMessage message)
-{
- // After the message is taken from the queue, create RequestTelemetry to track its processing.
- // It might also make sense to get the name from the message.
- RequestTelemetry requestTelemetry = new RequestTelemetry { Name = "process " + queueName };
-
- var rootId = message.Properties["RootId"].ToString();
- var parentId = message.Properties["ParentId"].ToString();
- // Get the operation ID from the Request-Id (if you follow the HTTP Protocol for Correlation).
- requestTelemetry.Context.Operation.Id = rootId;
- requestTelemetry.Context.Operation.ParentId = parentId;
-
- var operation = telemetryClient.StartOperation(requestTelemetry);
-
- try
- {
- await ProcessMessage();
- }
- catch (Exception e)
- {
- telemetryClient.TrackException(e);
- throw;
- }
- finally
- {
- // Update status code and success as appropriate.
- telemetryClient.StopOperation(operation);
- }
-}
-```
+> [!IMPORTANT]
+> The WindowsAzure.ServiceBus and Microsoft.Azure.ServiceBus packages are deprecated.
### Azure Storage queue

The following example shows how to track the [Azure Storage queue](../../storage/queues/storage-dotnet-how-to-use-queues.md) operations and correlate telemetry between the producer, the consumer, and Azure Storage.
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Storage | [Blobs](../../storage/blobs/monitor-blob-storage-reference.md#resource-logs-preview), [Files](../../storage/files/storage-files-monitoring-reference.md#resource-logs-preview), [Queues](../../storage/queues/monitor-queue-storage-reference.md#resource-logs-preview), [Tables](../../storage/tables/monitor-table-storage-reference.md#resource-logs-preview) |
| Azure Stream Analytics |[Job logs](../../stream-analytics/stream-analytics-job-diagnostic-logs.md) |
| Azure Traffic Manager | [Traffic Manager log schema](../../traffic-manager/traffic-manager-diagnostic-logs.md) |
+| Azure Video Indexer|[Monitor Azure Video Indexer data reference](/azure/azure-video-indexer/monitor-video-indexer-data-reference)|
| Azure Virtual Network | Schema not available |
| Virtual network gateways | [Logging for Virtual Network Gateways](../../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md)|
azure-monitor Custom Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-fields.md
For example, the sample record below has useful data buried in the event descrip
![Sample extract](media/custom-fields/sample-extract.png)

> [!NOTE]
-> In the Preview, you are limited to 100 custom fields in your workspace. This limit will be expanded when this feature reaches general availability.
+> In the Preview, you are limited to 500 custom fields in your workspace. This limit will be expanded when this feature reaches general availability.
## Creating a custom field

When you create a custom field, Log Analytics must understand which data to use to populate its value. It uses a technology from Microsoft Research called FlashExtract to quickly identify this data. Rather than requiring you to provide explicit instructions, Azure Monitor learns about the data you want to extract from examples that you provide.
azure-monitor Powershell Workspace Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/powershell-workspace-configuration.md
try {
}

# Create the workspace
-New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Sku Standard -ResourceGroupName $ResourceGroup
+New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Sku PerGB2018 -ResourceGroupName $ResourceGroup
```

## Create workspace and configure data sources
try {
}

# Create the workspace
-New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Sku Standard -ResourceGroupName $ResourceGroup
+New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Sku PerGB2018 -ResourceGroupName $ResourceGroup
# List of solutions to enable
$Solutions = "Security", "Updates", "SQLAssessment"
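As a hedged sketch of how a script typically consumes that list (cmdlet and parameters from the Az.OperationalInsights module):

```powershell
# Enable each solution on the workspace created above.
foreach ($solution in $Solutions) {
    Set-AzOperationalInsightsIntelligencePack -ResourceGroupName $ResourceGroup `
        -WorkspaceName $WorkspaceName -IntelligencePackName $solution -Enabled $true
}
```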
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 04/27/2022 Last updated : 05/13/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files standard network features are supported for the following reg
* Australia Central
* Australia Central 2
* Australia Southeast
+* East US
* East US 2
* France Central
* Germany West Central
azure-portal Per Vm Quota Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/per-vm-quota-requests.md
Title: Increase VM-family vCPU quotas description: Learn how to request an increase in the vCPU quota limit for a VM family in the Azure portal, which increases the total regional vCPU limit by the same amount. Previously updated : 1/26/2022 Last updated : 05/11/2022
Standard vCPU quotas apply to pay-as-you-go VMs and reserved VM instances. They
This article shows how to request increases for VM-family vCPU quotas. You can also request increases for [vCPU quotas by region](regional-quota-requests.md) or [spot vCPU quotas](spot-quota.md).
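Before requesting an increase, it can help to compare current usage against the limits. A hedged sketch using the Az.Compute module; the region is a placeholder:

```powershell
# List vCPU usage and limits per VM family for a region.
Get-AzVMUsage -Location 'eastus' |
    Where-Object { $_.Name.LocalizedValue -like '*vCPUs*' } |
    Select-Object @{n='Quota';e={$_.Name.LocalizedValue}}, CurrentValue, Limit
```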
-## Increase a VM-family vCPU quota
+## Adjustable and non-adjustable quotas
-To request a standard vCPU quota increase per VM-family from **Usage + quotas**:
+When requesting a quota increase, the steps differ depending on whether the quota is adjustable or non-adjustable.
-1. In the Azure portal, search for and select **Subscriptions**.
-1. Select the subscription whose quota you want to increase.
-1. In the left pane, select **Usage + quotas**.
-1. In the main pane, find the VM-family vCPU quota you want to increase, then select the pencil icon. The example below shows Standard DSv3 Family vCPUs deployed in the East US region. The **Usage** column displays the current quota usage and the current quota limit.
-1. In **Quota details**, enter your new quota limit, then select **Save and continue**.
+- **Adjustable quotas**: Quotas for which you can request quota increases fall into this category. Each subscription has a default quota value for each quota. You can request an increase for an adjustable quota from the [Azure Home](https://ms.portal.azure.com/#home) **My quotas** page, providing an amount or usage percentage and submitting it directly. This is the quickest way to increase quotas.
+- **Non-adjustable quotas**: These are quotas which have a hard limit, usually determined by the scope of the subscription. To make changes, you must submit a support request, and the Azure support team will help provide solutions.
- :::image type="content" source="media/resource-manager-core-quotas-request/quota-increase-example.png" alt-text="Screenshot of the Usage + quotas pane." lightbox="media/resource-manager-core-quotas-request/quota-increase-example.png":::
+## Request an increase for adjustable quotas
-Your request will be reviewed, and you'll be notified whether the request is approved or rejected. This usually happens within a few minutes. If your request is rejected, you'll see a link where you can open a support request so that a support engineer can assist you with the increase.
+You can submit a request for a standard vCPU quota increase per VM-family from **My quotas**, quickly accessed from [Azure Home](https://ms.portal.azure.com/#home).
+
+1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**.
+
+ > [!TIP]
+ > After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://ms.portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it.
+
+1. On the **Overview** page, select **Compute**.
+1. On the **My quotas** page, select the quota or quotas you want to increase.
+
+ :::image type="content" source="media/per-vm-quota-requests/select-per-vm-quotas.png" alt-text="Screenshot showing per-VM quota selection in the Azure portal.":::
+
+1. Near the top of the page, select **Request quota increase**, then select the way you'd like to increase the quota(s).
+
+ :::image type="content" source="media/per-vm-quota-requests/request-quota-increase-options.png" alt-text="Screenshot showing the options to request a quota increase in the Azure portal.":::
+
+ > [!TIP]
+ > Choosing **Adjust the usage %** allows you to select one usage percentage to apply to all the selected quotas without requiring you to calculate an absolute number (limit) for each quota. This option is recommended when the selected quotas have very high usage.
+
+1. If you selected **Enter a new limit**, in the **Request quota increase** pane, enter a numerical value for your new quota limit(s), then select **Submit**.
+
+ :::image type="content" source="media/per-vm-quota-requests/per-vm-request-quota-increase-new-limit.png" alt-text="Screenshot showing the Enter a new limit option for a per-VM quota increase request.":::
+
+1. If you selected **Adjust the usage %**, in the **Request quota increase** pane, adjust the slider to a new usage percent. Adjusting the percentage automatically calculates the new limit for each quota to be increased. This option is recommended when the selected quotas have very high usage. When you're finished, select **Submit**.
+
+ :::image type="content" source="media/per-vm-quota-requests/per-vm-request-quota-increase-adjust-usage.png" alt-text="Screenshot showing the Adjust the usage % option for a per-VM quota increase request.":::
+
+Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. If your request is not fulfilled, you'll see a link where you can [open a support request](how-to-create-azure-support-request.md) so that a support engineer can assist you with the increase.
> [!NOTE]
> If your request to increase your VM-family quota is approved, Azure will automatically increase the regional vCPU quota for the region where your VM is deployed.
Your request will be reviewed, and you'll be notified whether the request is app
> [!TIP]
> When creating or resizing a virtual machine and selecting your VM size, you may see some options listed under **Insufficient quota - family limit**. If so, you can request a quota increase directly from the VM creation page by selecting the **Request quota** link.
-## Increase a VM-family vCPU quota from Help + support
+## Request an increase when a quota isn't available
+
+At times you may see a message that a selected quota isn't available for an increase. To see which quotas are not available, look for the Information icon next to the quota name.
++
+If a quota you want to increase isn't currently available, the quickest solution may be to consider other series or regions. If you want to continue and receive assistance for your specified quota, you can submit a support request for the increase.
+
+1. When following the steps above, if a quota isn't available, select the Information icon next to the quota. Then select **Create a support request**.
+1. In the **Quota details** pane, confirm the pre-filled information is correct, then enter the desired new vCPU limit(s).
+
+ :::image type="content" source="media/per-vm-quota-requests/quota-details.png" alt-text="Screenshot of the Quota details pane in the Azure portal.":::
+
+1. Select **Save and continue** to open the **New support request** form. Continue to enter the required information, then select **Next**.
+1. Review your request information and select **Previous** to make changes, or **Create** to submit the request.
-To request a standard vCPU quota increase per VM family from **Help + support**, create a new support request in the Azure portal.
+## Request an increase for non-adjustable quotas
-1. For **Issue type**, select **Service and subscription limits (quotas)**.
-1. For **Subscription**, select the subscription whose quota you want to increase.
-1. For **Quota type**, select **Compute-VM (cores-vCPUs) subscription limit increases**.
+To request an increase for a non-adjustable quota, such as Virtual Machines or Virtual Machine Scale Sets, you must submit a support request so that a support engineer can assist you.
- :::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal.":::
+1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**.
+1. From the **Overview** page, select **Compute**.
+1. Find the quota you want to increase, then select the support icon.
-From there, follow the steps described in [Create a support request](how-to-create-azure-support-request.md#create-a-support-request).
+ :::image type="content" source="media/per-vm-quota-requests/support-icon.png" alt-text="Screenshot showing the support icon in the Azure portal.":::
-## Increase multiple VM-family CPU quotas in one request
+1. In the **New support request form**, on the first page, confirm that the pre-filled information is correct.
+1. For **Quota type**, select **Other Requests**, then select **Next**.
-You can also request multiple increases at the same time (bulk request). Doing a bulk request quota increase may take longer than requesting to increase a single quota.
+ :::image type="content" source="media/per-vm-quota-requests/new-per-vm-quota-request.png" alt-text="Screenshot showing a new quota increase support request in the Azure portal.":::
-To request multiple increases together, first go to the **Usage + quotas** page as described above. Then do the following:
+1. On the **Additional details** page, under **Problem details**, enter the information required for your quota increase, including the new limit requested.
-1. Select **Request Increase** near the top of the screen.
-1. For **Quota type**, select **Compute-VM (cores-vCPUs) subscription limit increases**.
-1. Select **Next** to go to the **Additional details** screen, then select **Enter details**.
-1. In the **Quota details** screen:
+ :::image type="content" source="media/per-vm-quota-requests/quota-request-problem-details.png" alt-text="Screenshot showing the Problem details step of a quota increase request in the Azure portal.":::
- :::image type="content" source="media/resource-manager-core-quotas-request/quota-details-standard-set-vcpu-limit.png" alt-text="Screenshot showing the Quota details screen and selections.":::
+1. Scroll down and complete the form. When finished, select **Next**.
+1. Review your request information and select **Previous** to make changes, or **Create** to submit the request.
- 1. For **Deployment model**, ensure **Resource Manager** is selected.
- 1. For **Locations**, select all regions in which you want to increase quotas.
- 1. For each region you selected, select one or more VM series from the **Quotas** drop-down list.
- 1. For each **VM Series** you selected, enter the new vCPU limit that you want for this subscription.
- 1. When you're finished, select **Save and continue**.
-1. Enter or confirm your contact details, then select **Next**.
-1. Finally, ensure that everything looks correct on the **Review + create** page, then select **Create** to submit your request.
+For more information, see [Create a support request](how-to-create-azure-support-request.md).
## Next steps

- Learn more about [vCPU quotas](../../virtual-machines/windows/quotas.md).
+- Learn more in [Quotas overview](quotas-overview.md).
- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
azure-portal Quotas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/quotas-overview.md
+
+ Title: Quotas overview
+description: Learn how to view quotas and request increases in the Azure portal.
Last updated : 05/11/2022+++
+# Quotas overview
+
+Many Azure services have quotas, which are the assigned number of resources for your Azure subscription. Each quota represents a specific countable resource, such as the number of virtual machines you can create, the number of storage accounts you can use concurrently, the number of networking resources you can consume, or the number of API calls to a particular service you can make.
+
+The concept of quotas is designed to help protect customers from inaccurately resourced deployments and mistaken consumption. For Azure, it helps minimize risks from deceptive or inappropriate consumption and unexpected demand. Quotas are set and enforced in the scope of the [subscription](/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide).
+
+## Quotas or limits?
+
+Quotas were previously referred to as limits. Quotas do have limits, but the limits are variable and depend on many factors. Each subscription has a default value for each quota.
+
+> [!NOTE]
+> There is no cost associated with requesting a quota increase. Costs are incurred based on resource usage, not the quotas themselves.
+
+## Adjustable and non-adjustable quotas
+
+Quotas can be adjustable or non-adjustable.
+
+- **Adjustable quotas**: Quotas for which you can request quota increases fall into this category. Each subscription has a default quota value for each quota. You can request an increase for an adjustable quota from the [Azure Home](https://ms.portal.azure.com/#home) **My quotas** page, providing an amount or usage percentage and submitting it directly. This is the quickest way to increase quotas.
+- **Non-adjustable quotas**: These are quotas which have a hard limit, usually determined by the scope of the subscription. To make changes, you must submit a support request, and the Azure support team will help provide solutions.
+
+## Work with quotas
+
+Different entry points, data views, actions, and programming options are available, depending on your organization and administrator preferences.
+
+| Option | Azure portal | Quota APIs | Support API |
+|||||
+| Summary | The portal provides a customer-friendly user interface for accessing quota information.<br><br>From [Azure Home](https://ms.portal.azure.com/#home), **Quotas** is a centralized location to directly view quotas and quota usage and request quota increases.<br><br>From the Subscriptions page, **Quotas + usage** offers quick access to requesting quota increases for a given subscription.| The [Azure Quota API](/rest/api/reserved-vm-instances/quotaapi) programmatically provides the ability to get current quota limits, find current usage, and request quota increases by subscription, resource provider, and location. | The [Azure Support REST API](/rest/api/support/) enables customers to create service quota support tickets programmatically. |
+| Availability | All customers | All customers | All customers with unified, premier, professional direct support plans |
+| Which to choose? | Useful for customers desiring a central location and an efficient visual interface for viewing and managing quotas. Provides quick access to requesting quota increases. | Useful for customers who want granular and programmatic control of quota management for adjustable quotas. Intended for end-to-end automation of quota usage validation and quota increase requests through APIs. | Customers who want end-to-end automation of support request creation and management. Provides an alternative path to Azure portal for requests. |
+| Providers supported | All providers | Compute, Machine Learning | All providers |
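To give a flavor of the Quota API column above, here's a hedged sketch using `Invoke-AzRestMethod`. The provider path and `api-version` are assumptions based on the Microsoft.Capacity quota API, and the subscription ID and location are placeholders:

```powershell
# Get current quota limits for compute resources in one region.
Invoke-AzRestMethod -Method GET -Path ('/subscriptions/<subscription-id>' +
    '/providers/Microsoft.Capacity/resourceProviders/Microsoft.Compute' +
    '/locations/eastus/serviceLimits?api-version=2020-10-25')
```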
+
+## Next steps
+
+- Learn more about [viewing quotas in the Azure portal](view-quotas.md).
+- Learn how to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), and [spot vCPU quotas](spot-quota.md).
+- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
azure-portal Regional Quota Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/regional-quota-requests.md
When considering your vCPU needs across regions, keep in mind the following:
- When you request an increase in the vCPU quota for a VM series, Azure increases the regional vCPU quota limit by the same amount.

-- When you create a new subscription, the default value for the total number of vCPUs in a region might not be equal to the total default vCPU quota for all individual VM series. This discrepancy can result in a subscription with enough quota for each individual VM series that you want to deploy. However, there might not be enough quota to accommodate the total regional vCPUs for all deployments. In this case, you must submit a request to explicitly increase the quota limit of the regional vCPU quotas.
+- When you create a new subscription, the default value for the total number of vCPUs in a region might not be equal to the total default vCPU quota for all individual VM series. This can result in a subscription with enough quota for each individual VM series that you want to deploy, but not enough quota to accommodate the total regional vCPUs for all deployments. In this case, you must submit a request to explicitly increase the quota limit of the regional vCPU quotas.
-## Increase a regional vCPU quota
+## Request an increase for regional vCPU quotas
-To request a regional vCPU quota from **Usage + quotas**:
+1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**.
-1. In the Azure portal, search for and select **Subscriptions**.
+ > [!TIP]
+ > After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://ms.portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it.
-1. Select the subscription whose quota you want to increase.
+1. On the **Overview** page, select **Compute**.
+1. On the **My quotas** page, select **Region** and then unselect **All**.
+1. In the **Region** list, select the regions you want to include for the quota increase request.
+1. Filter for any other requirements, such as **Usage**, as needed.
+1. Select the quota(s) that you want to increase.
-1. In the left pane, select **Usage + quotas**. Use the filters to view your quota by usage.
+ :::image type="content" source="media/regional-quota-requests/select-regional-quotas.png" alt-text="Screenshot showing regional quota selection in the Azure portal":::
-1. In the main pane, select **Total Regional vCPUs**, then select the pencil icon. The example below shows the regional vCPU quota for the NorthEast US region.
+1. Near the top of the page, select **Request quota increase**, then select the way you'd like to increase the quota(s).
- :::image type="content" source="media/resource-manager-core-quotas-request/regional-quota-total.png" alt-text="Screenshot of the Usage + quotas screen showing Total Regional vCPUs in the Azure portal." lightbox="media/resource-manager-core-quotas-request/regional-quota-total.png":::
+ :::image type="content" source="media/regional-quota-requests/request-quota-increase-options.png" alt-text="Screenshot showing the options to request a quota increase in the Azure portal.":::
-1. In **Quota details**, enter your new quota limit, then select **Save and continue**.
+ > [!TIP]
+ > Choosing **Adjust the usage %** allows you to select one usage percentage to apply to all the selected quotas without requiring you to calculate an absolute number (limit) for each quota. This option is recommended when the selected quotas have very high usage.
- Your request will be reviewed, and you'll be notified whether the request is approved or rejected. This usually happens within a few minutes. If your request is rejected, you'll see a link where you can open a support request so that a support engineer can assist you with the increase.
+1. If you selected **Enter a new limit**, in the **Request quota increase** pane, enter a numerical value for your new quota limit(s), then select **Submit**.
-> [!TIP]
-> You can also request multiple increases at the same time. For more information, see [Increase multiple VM-family CPU quotas in one request](per-vm-quota-requests.md#increase-multiple-vm-family-cpu-quotas-in-one-request).
+ :::image type="content" source="media/regional-quota-requests/regional-request-quota-increase-new-limit.png" alt-text="Screenshot showing the Enter a new limit option for a regional quota increase request.":::
-## Increase a regional quota from Help + support
+1. If you selected **Adjust the usage %**, in the **Request quota increase** pane, adjust the slider to a new usage percent. Adjusting the percentage automatically calculates the new limit for each quota to be increased. This option is recommended when the selected quotas have very high usage. When you're finished, select **Submit**.
-To request a standard vCPU quota increase per VM family from **Help + support**, create a new support request in the Azure portal.
+ :::image type="content" source="media/regional-quota-requests/regional-request-quota-increase-adjust-usage.png" alt-text="Screenshot showing the Adjust the usage % option for a regional quota increase request.":::
-1. For **Issue type**, select **Service and subscription limits (quotas)**.
-1. For **Subscription**, select the subscription whose quota you want to increase.
-1. For **Quota type**, select **Compute-VM (cores-vCPUs) subscription limit increases**.
-
- :::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal.":::
-
-From there, follow the steps described in [Create a support request](how-to-create-azure-support-request.md#create-a-support-request).
+Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. If your request is not fulfilled, you'll see a link where you can [open a support request](how-to-create-azure-support-request.md) so that a support engineer can assist you with the increase.
## Next steps
+- Learn more about [vCPU quotas](../../virtual-machines/windows/quotas.md).
+- Learn more in [Quotas overview](quotas-overview.md).
+- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
- Review the [list of Azure regions and their locations](https://azure.microsoft.com/regions/).-- Get an overview of [Azure regions for virtual machines](../../virtual-machines/regions.md) and how to maximize VM performance, availability, and redundancy in a given region.
azure-portal Spot Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/spot-quota.md
Title: Increase spot vCPU quotas
+ Title: Request an increase for spot vCPU quotas
description: Learn how to request increases for spot vCPU quotas in the Azure portal. Previously updated : 1/26/2022 Last updated : 05/11/2022
-# Increase spot vCPU quotas
+# Request an increase for spot vCPU quotas
Azure Resource Manager enforces two types of vCPU quotas for virtual machines:
When considering your spot vCPU needs, keep in mind the following:
- At any point in time when Azure needs the capacity back, the Azure infrastructure will evict spot VMs.
-## Increase a spot vCPU quota
+## Request an increase for spot vCPU quotas
-To request a quota increase for a spot vCPU quota:
+1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**.
-1. In the Azure portal, search for and select **Subscriptions**.
-1. Select the subscription whose quota you want to increase.
-1. In the left pane, select **Usage + quotas**.
-1. In the main pane, search for spot and select **Total Regional Spot vCPUs** for the region you want to increase.
-1. In **Quota details**, enter your new quota limit.
+ > [!TIP]
+ > After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://ms.portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it.
- The example below requests a new quota limit of 103 for the Spot vCPUs across all VM-family vCPUs in the West US region.
+1. On the **Overview** page, select **Compute**.
+1. On the **My quotas** page, enter "spot" in the **Search** box.
+1. Filter for any other requirements, such as **Usage**, as needed.
+1. Find the quota or quotas you want to increase, and select them.
- :::image type="content" source="media/resource-manager-core-quotas-request/spot-quota.png" alt-text="Screenshot of a spot vCPU quota increase request in the Azure portal." lightbox="media/resource-manager-core-quotas-request/spot-quota.png":::
+ :::image type="content" source="media/spot-quota/select-spot-quotas.png" alt-text="Screenshot showing spot quota selection in the Azure portal":::
-1. Select **Save and continue**.
+1. Near the top of the page, select **Request quota increase**, then select the way you'd like to increase the quota(s).
-Your request will be reviewed, and you'll be notified whether the request is approved or rejected. This usually happens within a few minutes. If your request is rejected, you'll see a link where you can open a support request so that a support engineer can assist you with the increase.
+ :::image type="content" source="media/spot-quota/request-quota-increase-options.png" alt-text="Screenshot showing the options to request a quota increase in the Azure portal.":::
-## Increase a spot quota from Help + support
+ > [!TIP]
+ > Choosing **Adjust the usage %** allows you to select one usage percentage to apply to all the selected quotas without requiring you to calculate an absolute number (limit) for each quota. This option is recommended when the selected quotas have very high usage.
-To request a spot vCPU quota increase from **Help + support**, create a new support request in the Azure portal.
+1. If you selected **Enter a new limit**, in the **Request quota increase** pane, enter a numerical value for your new quota limit(s), then select **Submit**.
-1. For **Issue type**, select **Service and subscription limits (quotas)**.
-1. For **Subscription**, select the subscription whose quota you want to increase.
-1. For **Quota type**, select **Compute-VM (cores-vCPUs) subscription limit increases**.
+ :::image type="content" source="media/spot-quota/spot-request-quota-increase-new-limit.png" alt-text="Screenshot showing the Enter a new limit option for a spot quota increase request.":::
- :::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal.":::
+1. If you selected **Adjust the usage %**, in the **Request quota increase** pane, adjust the slider to a new usage percent. Adjusting the percentage automatically calculates the new limit for each quota to be increased. This option is recommended when the selected quotas have very high usage. When you're finished, select **Submit**.
-From there, follow the steps described in [Create a support request](how-to-create-azure-support-request.md#create-a-support-request).
+ :::image type="content" source="media/spot-quota/spot-request-quota-increase-adjust-usage.png" alt-text="Screenshot showing the Adjust the usage % option for a spot quota increase request.":::
+
+Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. If your request is not fulfilled, you'll see a link where you can [open a support request](how-to-create-azure-support-request.md) so that a support engineer can assist you with the increase.
## Next steps

-- Learn more about [Azure spot virtual machines](../../virtual-machines/spot-vms.md).
+- Learn more about [Azure spot virtual machines](../../virtual-machines/spot-vms.md).
+- Learn more in [Quotas overview](quotas-overview.md).
- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
azure-portal View Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/view-quotas.md
Title: View quotas
-description: Learn how to view quotas and request increases in the Azure portal.
+description: Learn how to view quotas in the Azure portal.
Last updated 02/14/2022
The **Quotas** page in the Azure portal is the centralized location where you can view your quotas. **My quotas** provides a comprehensive, customizable view of usage and other quota information so that you can assess quota usage. You can also request quota increases directly from **My quotas**.
-To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box.
+To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**.
> [!TIP]
-> After you've accessed **Quotas**, the service will appear at the top of the Home page in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it.
+> After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://ms.portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it.
## View quota details
-To view detailed information about your quotas, select **My quotas** in the left menu on the **Quotas** page.
+To view detailed information about your quotas, select **My quotas** in the left pane on the **Quotas** page.
> [!NOTE]
> You can also select a specific Azure provider from the **Quotas** overview page to view quotas and usage for that provider. If you don't see a provider, check the [Azure subscription and service limits page](../../azure-resource-manager/management/azure-subscription-service-limits.md) for more information.
-On the **My quotas** page, you can choose which quotas and usage data to display. The filter options at the top of the page let you filter by location, provider, subscription, and usage. You can also use the search box to look for a specific quota.
+On the **My quotas** page, you can choose which quotas and usage data to display. The filter options at the top of the page let you filter by location, provider, subscription, and usage. You can also use the search box to look for a specific quota. Depending on the provider you select, you may see some differences in filters and columns.
:::image type="content" source="media/view-quotas/my-quotas.png" alt-text="Screenshot of the My quotas screen in the Azure portal.":::

In the list of quotas, you can toggle the arrow shown next to **Quota** to expand and close categories. You can do the same next to each category to drill down and create a view of the information you need.
-## Request quota increases
-
-You can request quota increases directly from **My quotas**. The process for requesting an increase will depend on the type of quota.
-
-> [!NOTE]
-> There is no cost associated with requesting a quota increase. Costs are incurred based on resource usage, not the quotas themselves.
-
-### Request a quota increase
-
-Some quotas display a pencil icon. Select this icon to quickly request an increase for that quota.
--
-After you select the pencil icon, enter the new limit for your request in the **Quota Details** pane, then select **Save and Continue**. After a few minutes, you'll see a status update confirming whether the increase was fulfilled. If you close **Quota details** before the update appears, you can check it later in the Azure Activity Log.
-
-If your request wasn't fulfilled, you can select **Create a support request** so that your request can be evaluated by our support team.
-
-### Create a support request
-
-If the quota displays a support icon rather than a pencil, you'll need to create a support request in order to request the increase.
--
-Selecting the support icon will take you to the **New support request** page, where you can enter details about your new request. A support engineer will then assist you with the quota increase request.
-
-For more information about opening a support request, see [Create an Azure support request](how-to-create-azure-support-request.md).
## Next steps

-- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
-- Learn about other ways to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), and [spot vCPU quotas](spot-quota.md).
+- Learn more in [Quotas overview](quotas-overview.md).
+- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
+- Learn how to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), and [spot vCPU quotas](spot-quota.md).
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/best-practices.md
description: Describes practices to follow when creating your Bicep files so the
Previously updated : 11/02/2021 Last updated : 05/12/2022 # Best practices for Bicep
For more information about Bicep variables, see [Variables in Bicep](variables.m
* Use lower camel case for names, such as `myVariableName` or `myResource`.
-* The [uniqueString() function](bicep-functions-string.md#uniquestring) is useful for creating globally unique resource names. When you provide the same parameters, it returns the same string every time. Passing in the resource group ID means the string is the same on every deployment to the same resource group, but different when you deploy to different resource groups or subscriptions.
+* The [uniqueString() function](bicep-functions-string.md#uniquestring) is useful for creating unique resource names. When you provide the same parameters, it returns the same string every time. Passing in the resource group ID means the string is the same on every deployment to the same resource group, but different when you deploy to different resource groups or subscriptions.
-* Sometimes the `uniqueString()` function creates strings that start with a number. Some Azure resources, like storage accounts, don't allow their names to start with numbers. This requirement means it's a good idea to use string interpolation to create resource names. You can add a prefix to the unique string.
+* It's a good practice to use template expressions to create resource names, like in this example:
-* It's often a good idea to use template expressions to create resource names. Many Azure resource types have rules about the allowed characters and length of their names. Embedding the creation of resource names in the template means that anyone who uses the template doesn't have to remember to follow these rules themselves.
+ :::code language="bicep" source="~/azure-docs-bicep-samples/samples/best-practices/resource-name-expressions.bicep" highlight="3":::
+
+ Using template expressions to create resource names gives you several benefits:
+
+ * Strings generated by `uniqueString()` aren't meaningful. It's helpful to use a template expression to create a name that includes meaningful information, such as a short descriptor of the project or environment name, as well as a random component to make the name more likely to be unique.
+
+ * The `uniqueString()` function doesn't guarantee globally unique names. By adding more text to your resource names, you reduce the likelihood of reusing an existing resource name.
+
+ * Sometimes the `uniqueString()` function creates strings that start with a number. Some Azure resources, like storage accounts, don't allow their names to start with numbers. This requirement means it's a good idea to use string interpolation to create resource names. You can add a prefix to the unique string.
+
+ * Many Azure resource types have rules about the allowed characters and length of their names. Embedding the creation of resource names in the template means that anyone who uses the template doesn't have to remember to follow these rules themselves.
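To make these benefits concrete, here's a minimal sketch; the `stg` prefix, resource type, and API version are illustrative:

```bicep
// Same resource group ID in -> same 13-character suffix out, on every deployment.
param location string = resourceGroup().location

var storageAccountName = 'stg${uniqueString(resourceGroup().id)}'

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: storageAccountName
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}
```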
* Avoid using `name` in a symbolic name. The symbolic name represents the resource, not the resource's name. For example, instead of the following syntax:
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listkeys](/rest/api/eventhub) | | Microsoft.ImportExport/jobs | [listBitLockerKeys](/rest/api/storageimportexport/bitlockerkeys/list) | | Microsoft.Kusto/Clusters/Databases | [ListPrincipals](/rest/api/azurerekusto/databases/listprincipals) |
-| Microsoft.LabServices/users | [ListEnvironments](/rest/api/labservices/globalusers/listenvironments) |
-| Microsoft.LabServices/users | [ListLabs](/rest/api/labservices/globalusers/listlabs) |
+| Microsoft.LabServices/labs/users | [list](/rest/api/labservices/users/list-by-lab) |
+| Microsoft.LabServices/labs/virtualMachines | [list](/rest/api/labservices/virtual-machines/list-by-lab) |
| Microsoft.Logic/integrationAccounts/agreements | [listContentCallbackUrl](/rest/api/logic/agreements/listcontentcallbackurl) | | Microsoft.Logic/integrationAccounts/assemblies | [listContentCallbackUrl](/rest/api/logic/integrationaccountassemblies/listcontentcallbackurl) | | Microsoft.Logic/integrationAccounts | [listCallbackUrl](/rest/api/logic/integrationaccounts/getcallbackurl) |
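As a hypothetical Bicep sketch of invoking a `list*` function (the storage account resource, name, and API version here are illustrative assumptions):

```bicep
// Reference an existing storage account and call its listKeys() function.
resource stg 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
  name: 'mystorageacct'
}

// list* functions execute at deployment time, so the key is resolved during deployment.
var connectionString = 'DefaultEndpointsProtocol=https;AccountName=${stg.name};AccountKey=${stg.listKeys().keys[0].value}'
```

Because `list*` functions run at deployment time, they can't be used in parameter default values.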
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 04/20/2022 Last updated : 05/13/2022 # Tag support for Azure resources
To get the same data as a file of comma-separated values, download [tag-support.
> | servers / usages | No | No | > | servers / virtualNetworkRules | No | No | > | servers / vulnerabilityAssessments | No | No |
-> | virtualClusters | Yes | Yes |
+> | virtualClusters | No | No |
<a id="sqlnote"></a>
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-github-actions.md
Title: Deploy Resource Manager templates by using GitHub Actions description: Describes how to deploy Azure Resource Manager templates (ARM templates) by using GitHub Actions. Previously updated : 02/07/2022 Last updated : 05/10/2022
The file has two sections:
|Section |Tasks | |||
-|**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. |
+|**Authentication** | 1. Generate deployment credentials. |
|**Deploy** | 1. Deploy the Resource Manager template. | ## Generate deployment credentials
+# [Service principal](#tab/userlevel)
+ You can create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button. Create a resource group if you do not already have one.
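A representative invocation might look like the following sketch (the service principal name and the bracketed placeholders are illustrative; `--sdk-auth` emits the JSON credentials that you later store as a GitHub secret):

```azurecli
az ad sp create-for-rbac \
  --name "github-actions-deploy" \
  --role contributor \
  --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
  --sdk-auth
```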
In the example above, replace the placeholders with your subscription ID and res
> [!IMPORTANT] > It is always a good practice to grant minimum access. The scope in the previous example is limited to the resource group.
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
++ ## Configure the GitHub secrets
+# [Service principal](#tab/userlevel)
+ You need to create secrets for your Azure credentials, resource group, and subscriptions. 1. In [GitHub](https://github.com/), browse your repository.
You need to create secrets for your Azure credentials, resource group, and subsc
1. Create an additional secret named `AZURE_SUBSCRIPTION`. Add your subscription ID to the secret's value field (example: `90fd3f9d-4c61-432d-99ba-1273f236afa2`).
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
++ ## Add Resource Manager template Add a Resource Manager template to your GitHub repository. This template creates a storage account.
The workflow file must be stored in the **.github/workflows** folder at the root
1. Select **set up a workflow yourself**. 1. Rename the workflow file if you prefer a different name other than **main.yml**. For example: **deployStorageAccount.yml**. 1. Replace the content of the yml file with the following:
+ # [Service principal](#tab/userlevel)
- ```yml
+ ```yml
on: [push] name: Azure ARM jobs:
The workflow file must be stored in the **.github/workflows** folder at the root
# output containerName variable from template - run: echo ${{ steps.deploy.outputs.containerName }}
- ```
+ ```
+
+ > [!NOTE]
+   > You can instead specify a JSON-format parameters file in the ARM Deploy action (example: `.azuredeploy.parameters.json`), as sketched below.
+
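+   For instance, the deploy step can point at the parameters file instead of inline values (a fragment of the step shown above, assuming the file sits at the repository root):
+
+   ```yml
+   - name: Run ARM deploy
+     uses: azure/arm-deploy@v1
+     with:
+       subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION }}
+       resourceGroupName: ${{ secrets.AZURE_RG }}
+       template: ./azuredeploy.json
+       parameters: ./azuredeploy.parameters.json
+   ```
+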
+ The first section of the workflow file includes:
+
+ - **name**: The name of the workflow.
+   - **on**: The GitHub event that triggers the workflow. The workflow is triggered when there's a push event on the main branch that modifies at least one of the two specified files: the workflow file and the template file.
+
+ # [OpenID Connect](#tab/openid)
+
+ ```yml
+ on: [push]
+ name: Azure ARM
+ jobs:
+ build-and-deploy:
+ runs-on: ubuntu-latest
+ steps:
+
+ # Checkout code
+ - uses: actions/checkout@main
+
+ # Log into Azure
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ # Deploy ARM template
+ - name: Run ARM deploy
+ uses: azure/arm-deploy@v1
+ with:
+          subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ resourceGroupName: ${{ secrets.AZURE_RG }}
+ template: ./azuredeploy.json
+ parameters: storageAccountType=Standard_LRS
+
+ # output containerName variable from template
+ - run: echo ${{ steps.deploy.outputs.containerName }}
+ ```
- > [!NOTE]
- > You can specify a JSON format parameters file instead in the ARM Deploy action (example: `.azuredeploy.parameters.json`).
+ > [!NOTE]
+   > You can instead specify a JSON-format parameters file in the ARM Deploy action (example: `.azuredeploy.parameters.json`).
- The first section of the workflow file includes:
+ The first section of the workflow file includes:
- - **name**: The name of the workflow.
- - **on**: The name of the GitHub events that triggers the workflow. The workflow is trigger when there is a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the template file.
+ - **name**: The name of the workflow.
+   - **on**: The GitHub event that triggers the workflow. The workflow is triggered when there's a push event on the main branch that modifies at least one of the two specified files: the workflow file and the template file.
+
1. Select **Start commit**. 1. Select **Commit directly to the main branch**.
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listKeys](/rest/api/eventhub) | | Microsoft.ImportExport/jobs | [listBitLockerKeys](/rest/api/storageimportexport/bitlockerkeys/list) | | Microsoft.Kusto/Clusters/Databases | [ListPrincipals](/rest/api/azurerekusto/databases/listprincipals) |
-| Microsoft.LabServices/users | [ListEnvironments](/rest/api/labservices/globalusers/listenvironments) |
-| Microsoft.LabServices/users | [ListLabs](/rest/api/labservices/globalusers/listlabs) |
+| Microsoft.LabServices/labs/users | [list](/rest/api/labservices/users/list-by-lab) |
+| Microsoft.LabServices/labs/virtualMachines | [list](/rest/api/labservices/virtual-machines/list-by-lab) |
| Microsoft.Logic/integrationAccounts/agreements | [listContentCallbackUrl](/rest/api/logic/agreements/listcontentcallbackurl) | | Microsoft.Logic/integrationAccounts/assemblies | [listContentCallbackUrl](/rest/api/logic/integrationaccountassemblies/listcontentcallbackurl) | | Microsoft.Logic/integrationAccounts | [listCallbackUrl](/rest/api/logic/integrationaccounts/getcallbackurl) |
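An illustrative ARM template fragment calling a `list*` function in a template expression (the `storageAccountName` parameter and API version are assumptions; echoing a key in an output is for demonstration only):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [],
  "outputs": {
    "storageKey": {
      "type": "string",
      "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value]"
    }
  }
}
```

Note that `list*` functions can't be used in an ARM template's `variables` section, because they resolve at deployment time; use them in resource properties or outputs.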
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
To complete this tutorial, you must have the following prerequisites:
| Setting Name | Suggested Value | Description | | | | -- | | Application name | *Azure SignalR Chat* | The GitHub user should be able to recognize and trust the app they are authenticating with. |
- | Homepage URL | `http://localhost:5000/home` | |
+ | Homepage URL | `http://localhost:5000` | |
| Application description | *A chat room sample using the Azure SignalR Service with GitHub authentication* | A useful description of the application that will help your application users understand the context of the authentication being used. | | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the *AspNet.Security.OAuth.GitHub* package, */signin-github*. |
The last thing you need to do is update the **Homepage URL** and **Authorization
| Setting | Example | | - | - |
- | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net/home` |
+ | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net` |
| Authorization callback URL | `https://signalrtestwebapp22665120.azurewebsites.net/signin-github` | 3. Navigate to your web app URL and test the application.
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
+
+ Title: Monitoring Azure Video Indexer data reference #Required; *your official service name*
+description: Important reference material needed when you monitor Azure Video Indexer
+++++ Last updated : 05/10/2022 #Required; mm/dd/yyyy format.+
+<!-- VERSION 2.3
+Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. -->
+
+<!-- IMPORTANT STEP 1. Do a search and replace of Azure Video Indexer with the name of your service. That will make the template easier to read -->
+
+# Monitor Azure Video Indexer data reference
+
+See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for details on collecting and analyzing monitoring data for Azure Video Indexer.
+
+## Metrics
+
+Azure Video Indexer currently doesn't support monitoring of metrics.
+<!-- REQUIRED if you support Metrics. If you don't, keep the section but call that out. Some services are only onboarded to logs.
+<!-- Please keep headings in this order -->
+
+<!-- 2 options here depending on the level of extra content you have. -->
+
+<!--**OPTION 1 EXAMPLE**
+
+<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/platform/metrics-supported, which is auto generated from underlying systems. Not all metrics are published depending on whether your product group wants them to be. If the metric is published, but descriptions are wrong or missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing metrics that you and the PM know are available, both of you contact azmondocs@microsoft.com.
+-->
+
+<!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
+
+<!--This section lists all the automatically collected platform metrics collected for Azure Video Indexer.
+
+|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| Virtual Machine | [Microsoft.Compute/virtualMachine](/azure/azure-monitor/platform/metrics-supported#microsoftcomputevirtualmachines) |
+| Virtual machine scale set | [Microsoft.Compute/virtualMachinescaleset](/azure/azure-monitor/platform/metrics-supported#microsoftcomputevirtualmachinescaleset)
+
+--**OPTION 2 EXAMPLE** -
+
+<!-- OPTION 2 - Link to the metrics as above, but work in extra information not found in the automated metric-supported reference article. NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the metrics-supported link. For highly customized example, see [CosmosDB](https://docs.microsoft.com/azure/cosmos-db/monitor-cosmos-db-reference#metrics). They even regroup the metrics into usage type vs. resource provider and type.
+-->
+
+<!-- Example format. Mimic the setup of metrics supported, but add extra information -->
+
+<!--### Virtual Machine metrics
+
+Resource Provider and Type: [Microsoft.Compute/virtualMachines](/azure/azure-monitor/platform/metrics-supported#microsoftcomputevirtualmachines)
+
+| Metric | Unit | Description | *TODO replace this label with other information* |
+|:-|:--|:|:|
+| | | | Use this metric for <!-- put your specific information in here -->
+<!--| | | | |
+
+<!--### Virtual machine scale set metrics
+
+Namespace- [Microsoft.Compute/virtualMachinesscaleset](/azure/azure-monitor/platform/metrics-supported#microsoftcomputevirtualmachinescalesets)
+
+| Metric | Unit | Description | *TODO replace this label with other information* |
+|:-|:--|:|:|
+| | | | Use this metric for <!-- put your specific information in here -->
+<!--| | | | |
+
+<!-- Add additional explanation of reference information as needed here. Link to other articles such as your Monitor [servicename] article as appropriate. -->
+
+<!-- Keep this text as-is -->
+For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+
+## Metric dimensions
+
+Azure Video Indexer currently doesn't support any metrics, so there are no metric dimensions.
+<!-- REQUIRED. Please keep headings in this order -->
+<!-- If you have metrics with dimensions, outline it here. If you have no dimensions, say so. Questions email azmondocs@microsoft.com -->
+
+<!--For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+
+Azure Video Indexer does not have any metrics that contain dimensions.
+
+*OR*
+
+Azure Video Indexer has the following dimensions associated with its metrics.
+
+<!-- See https://docs.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
+
+<!--**--EXAMPLE format when you have dimensions**
+
+Azure Storage supports following dimensions for metrics in Azure Monitor.
+
+| Dimension Name | Description |
+| - | -- |
+| **BlobType** | The type of blob for Blob metrics only. The supported values are **BlockBlob**, **PageBlob**, and **Azure Data Lake Storage**. Append blobs are included in **BlockBlob**. |
+| **BlobTier** | Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. See more in [Azure Storage blob tier](/azure/storage/blobs/storage-blob-storage-tiers). The supported values include: <br/> <li>**Hot**: Hot tier</li> <li>**Cool**: Cool tier</li> <li>**Archive**: Archive tier</li> <li>**Premium**: Premium tier for block blob</li> <li>**P4/P6/P10/P15/P20/P30/P40/P50/P60**: Tier types for premium page blob</li> <li>**Standard**: Tier type for standard page Blob</li> <li>**Untiered**: Tier type for general purpose v1 storage account</li> |
+| **GeoType** | Transaction from Primary or Secondary cluster. The available values include **Primary** and **Secondary**. It applies to Read Access Geo Redundant Storage(RA-GRS) when reading objects from secondary tenant. | -->
+
+## Resource logs
+<!-- REQUIRED. Please keep headings in this order -->
+
+This section lists the types of resource logs you can collect for Azure Video Indexer.
+
+<!-- List all the resource log types you can have and what they are for -->
+
+For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+
+<!--**OPTION 1 EXAMPLE**
+
+<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/platform/resource-logs-categories, which is auto generated from the REST API. Not all resource log types metrics are published depending on whether your product group wants them to be. If the resource log is published, but category display names are wrong or missing, contact your PM and tell them to update them in the Azure Monitor "shoebox" manifest. If this article is missing resource logs that you and the PM know are available, both of you contact azmondocs@microsoft.com.
+-->
+
+<!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
+
+<!--This section lists all the resource log category types collected for Azure Video Indexer.
+
+|Resource Log Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| Web Sites | [Microsoft.web/sites](/azure/azure-monitor/platform/resource-logs-categories#microsoftwebsites) |
+| Web Site Slots | [Microsoft.web/sites/slots](/azure/azure-monitor/platform/resource-logs-categories#microsoftwebsitesslots)
+
+--**OPTION 2 EXAMPLE** -
+
+<!-- OPTION 2 - Link to the resource logs as above, but work in extra information not found in the automated metric-supported reference article. NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the resource-log-categories link. You can group these sections however you want provided you include the proper links back to resource-log-categories article.
+-->
+
+<!-- Example format. Add extra information -->
+
+<!--### Web Sites
+
+Resource Provider and Type: [Microsoft.videoindexer/accounts](/azure/azure-monitor/platform/resource-logs-categories#microsoftwebsites)
+
+| Category | Display Name | *TODO replace this label with other information* |
+|:|:-||
+| AppServiceAppLogs | App Service Application Logs | *TODO other important information about this type* |
+| AppServiceAuditLogs | Access Audit Logs | *TODO other important information about this type* |
+| etc. | | | -->
+
+### Azure Video Indexer
+
+Resource Provider and Type: [Microsoft.VideoIndexer/accounts](/azure/azure-monitor/platform/resource-logs-categories#microsoftvideoindexeraccounts)
+
+| Category | Display Name | Additional information |
+|:|:-||
+| VIAudit | Azure Video Indexer Audit Logs | Logs are produced from both the Video Indexer portal and the REST API. |
+
+<!-- --**END Examples** - -->
+
+## Azure Monitor Logs tables
+<!-- REQUIRED. Please keep heading in this order -->
+
+This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Video Indexer and available for query by Log Analytics.
+
+<!--**OPTION 1 EXAMPLE**
+
+<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com.
+-->
+
+<!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
+
+|Resource Type | Notes |
+|-|--|
+| [Azure Video Indexer](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | |
+
+<!-**OPTION 2 EXAMPLE** -
+
+<!-- OPTION 2 - List out your tables adding additional information on what each table is for. Individually link to each table using the table name. For example, link to [AzureMetrics](https://docs.microsoft.com/azure/azure-monitor/reference/tables/azuremetrics).
+
+NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the automatically generated list. You can group these sections however you want provided you include the proper links back to the proper tables.
+-->
+
+### Azure Video Indexer
+
+| Table | Description | Additional information |
+|:|:-||
+| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | Events produced using the Azure Video Indexer [portal](https://aka.ms/VIportal) or [REST API](https://aka.ms/vi-dev-portal). | |
+<!--| [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics) | <!-- description copied from previous link -->
+<!--Metric data emitted by Azure services that measure their health and performance. | *TODO other important information about this type |
+| etc. | | |
+
+<!--### Virtual Machine Scale Sets
+
+| Table | Description | *TODO replace this label with other information* |
+|:|:-||
+| [ADAssessmentRecommendation](/azure/azure-monitor/reference/tables/adassessmentrecommendation) | <!-- description copied from previous link -->
+<!-- Recommendations generated by AD assessments that are started through a scheduled task. When you schedule the assessment it runs by default every 7 days and upload the data into Azure Log Analytics | *TODO other important information about this type |
+| [ADReplicationResult](/azure/azure-monitor/reference/tables/adreplicationresult) | <!-- description copied from previous link -->
+<!--The AD Replication Status solution regularly monitors your Active Directory environment for any replication failures. | *TODO other important information about this type |
+| etc. | | |
+
+<!-- Add extra information if required -->
+
+For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
+
+<!-- --**END EXAMPLES** -
+
+### Diagnostics tables
+<!-- REQUIRED. Please keep heading in this order -->
+<!-- If your service uses the AzureDiagnostics table in Azure Monitor Logs / Log Analytics, list what fields you use and what they are for. Azure Diagnostics is over 500 columns wide with all services using the fields that are consistent across Azure Monitor and then adding extra ones just for themselves. If it uses service specific diagnostic table, refers to that table. If it uses both, put both types of information in. Most services in the future will have their own specific table. If you have questions, contact azmondocs@microsoft.com -->
+
+<!-- Azure Video Indexer uses the [Azure Diagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table and the [TODO whatever additional] table to store resource log information. The following columns are relevant.
+
+**Azure Diagnostics**
+
+| Property | Description |
+|: |:|
+| | |
+| | |
+
+**[TODO Service-specific table]**
+
+| Property | Description |
+|: |:|
+| | |
+| | |-->
+
+## Activity log
+<!-- REQUIRED. Please keep heading in this order -->
+
+The following table lists the operations related to Azure Video Indexer that may be created in the Activity log.
+
+<!-- Fill in the table with the operations that can be created in the Activity log for the service. -->
+| Operation | Description |
+|:|:|
+|Generate_AccessToken | |
+|Accounts_Update | |
+|Write tags | |
+|Create or update resource diagnostic setting| |
+|Delete resource diagnostic setting| |
+
+<!-- NOTE: This information may be hard to find or not listed anywhere. Please ask your PM for at least an incomplete list of what type of messages could be written here. If you can't locate this, contact azmondocs@microsoft.com for help -->
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+
+## Schemas
+<!-- REQUIRED. Please keep heading in this order -->
+
+The following schemas are in use by Azure Video Indexer:
+
+<!-- List the schema and their usage. This can be for resource logs, alerts, event hub formats, etc depending on what you think is important. -->
+
+```json
+{
+ "time": "2022-03-22T10:59:39.5596929Z",
+ "resourceId": "/SUBSCRIPTIONS/602a61eb-c111-43c0-8323-74825230a47d/RESOURCEGROUPS/VI-RESOURCEGROUP/PROVIDERS/MICROSOFT.VIDEOINDEXER/ACCOUNTS/VIDEOINDEXERACCOUNT",
+ "operationName": "Get-Video-Thumbnail",
+ "category": "Audit",
+ "location": "westus2",
+ "durationMs": "192",
+ "resultSignature": "200",
+ "resultType": "Success",
+ "resultDescription": "Get Video Thumbnail",
+ "correlationId": "33473fc3-bcbc-4d47-84cc-9fba2f3e9faa",
+ "callerIpAddress": "46.*****",
+ "operationVersion": "Operations",
+ "identity": {
+ "externalUserId": "4704F34286364F2*****",
+ "upn": "alias@outlook.com",
+ "claims": { "permission": "Reader", "scope": "Account" }
+ },
+ "properties": {
+    "accountName": "videoIndexerAccount",
+ "accountId": "8878b584-d8a0-4752-908c-00d6e5597f55",
+ "videoId": "1e2ddfdd77"
+ }
+ }
+ ```
+
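+When the **VIAudit** category is routed to a Log Analytics workspace, events with this schema land in the `VIAudit` table. A minimal illustrative query (column names as used in the sample queries in [Monitoring Azure Video Indexer](monitor-video-indexer.md)):
+
+```kusto
+// Most recent Video Indexer audit events.
+VIAudit
+| project TimeGenerated, OperationName, Status, Upn
+| top 20 by TimeGenerated desc
+```
+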
+## Next steps
+
+<!-- replace below with the proper link to your main monitoring service article -->
+- See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
+
+ Title: Monitoring Azure Video Indexer #Required; Must be "Monitoring *Azure Video Indexer*
+description: Start here to learn how to monitor Azure Video Indexer #Required;
+++++ Last updated : 05/10/2022 #Required; mm/dd/yyyy format.++
+<!-- VERSION 2.2
+Template for the main monitoring article for Azure services.
+Keep the required sections and add/modify any content for any information specific to your service.
+This article should be in your TOC with the name *monitor-Azure Video Indexer.md* and the TOC title "Monitor Azure Video Indexer".
+Put accompanying reference information into an article in the Reference section of your TOC with the name *monitor-Azure Video Indexer-reference.md* and the TOC title "Monitoring data".
+Keep the headings in this order.
+-->
+
+<!-- IMPORTANT STEP 1. Do a search and replace of Azure Video Indexer with the name of your service. That will make the template easier to read -->
+
+# Monitoring Azure Video Indexer
+<!-- REQUIRED. Please keep headings in this order -->
+<!-- Most services can use this section unchanged. Add to it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. -->
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Azure Video Indexer. Azure Video Indexer uses [Azure Monitor](/azure/azure-monitor/overview). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
++
+<!-- Optional diagram showing monitoring for your service. -->
+
+<!--## Monitoring overview page in Azure portal
+<!-- OPTIONAL. Please keep headings in this order -->
+<!-- If you don't have an over page, remove this section. If you keep it, edit it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. -->
+
+<!--The **Overview** page in the Azure portal for each *Azure Video Indexer account* includes *[provide a description of the data in the Overview page.]*.
++
+## *Azure Video Indexer* insights
+
+<!-- OPTIONAL SECTION. Only include if your service has an "insight" associated with it. Examples of insights include
+ - CosmosDB https://docs.microsoft.com/azure/azure-monitor/insights/cosmosdb-insights-overview
+ - If you still aren't sure, contact azmondocs@microsoft.com.>
+-->
+
+Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
+
+<!-- Give a quick outline of what your "insight page" provides and refer to another article that gives details -->
+
+## Monitoring data
+
+<!-- REQUIRED. Please keep headings in this order -->
+Azure Video Indexer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitoring *Azure Video Indexer* data reference](monitor-video-indexer-data-reference.md) for detailed information on the metrics and logs metrics created by Azure Video Indexer.
+
+<!-- If your service has additional non-Azure Monitor monitoring data then outline and refer to that here. Also include that information in the data reference as appropriate. -->
+
+## Collection and routing
+
+<!-- REQUIRED. Please keep headings in this order -->
+
+The Activity log is collected and stored automatically, but it can be routed to other locations by using a diagnostic setting.
+
+Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+<!-- Include any additional information on collecting logs. The number of things that diagnostics settings control is expanding -->
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure Video Indexer* are listed in [Azure Video Indexer monitoring data reference](monitor-video-indexer-data-reference.md#resource-logs).
+
+| Category | Description |
+|:|:|
+|Audit | Read/Write operations|
++
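+As an illustrative sketch (the setting name and resource IDs are placeholders), a diagnostic setting that routes the **Audit** category to a Log Analytics workspace could be created with the Azure CLI:
+
+```azurecli
+az monitor diagnostic-settings create \
+  --name vi-audit-logs \
+  --resource "<Video-Indexer-account-resource-ID>" \
+  --logs '[{"category":"Audit","enabled":true}]' \
+  --workspace "<Log-Analytics-workspace-resource-ID>"
+```
+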
+<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://docs.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. -->
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+Currently, Azure Video Indexer doesn't support monitoring of metrics.
+<!-- REQUIRED. Please keep headings in this order
+If you don't support metrics, say so. Some services may be only onboarded to logs -->
+
+<!--You can analyze metrics for *Azure Video Indexer* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+
+<!-- Point to the list of metrics available in your monitor-service-reference article. -->
+<!--For a list of the platform metrics collected for Azure Video Indexer, see [Monitoring *Azure Video Indexer* data reference metrics](monitor-service-reference.md#metrics)
+
+<!-- REQUIRED for services that use a Guest OS. That includes agent based services like Virtual Machines, Service Fabric, Cloud Services, and perhaps others. Delete the section otherwise -->
+<!--Guest OS metrics must be collected by agents running on the virtual machines hosting your service. <!-- Add additional information as appropriate -->
+<!--For more information, see [Overview of Azure Monitor agents](/azure/azure-monitor/platform/agents-overview)
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported).
+
+<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about how to use metrics explorer specifically for your service. Remember that the UI is subject to change quite often so you will need to maintain these screenshots yourself if you add them in. -->
+
+## Analyzing logs
+
+<!-- REQUIRED. Please keep headings in this order
+If you don't support resource logs, say so. Some services may be only onboarded to metrics and the activity log. -->
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The schema for Azure Video Indexer resource logs is found in the [Azure Video Indexer Data Reference](monitor-video-indexer-data-reference.md#schemas).
+
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can run much more complex queries using Log Analytics.
+
+For a list of the types of resource logs collected for Azure Video Indexer, see [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md#resource-logs)
+
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md#azure-monitor-logs-tables)
+
+<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about log usage or what logs are most important. Remember that the UI is subject to change quite often so you will need to maintain these screenshots yourself if you add them in. -->
+
+### Sample Kusto queries
+
+<!-- REQUIRED if you support logs. Please keep headings in this order -->
+<!-- Add sample Log Analytics Kusto queries for your service. -->
+
+> [!IMPORTANT]
+> When you select **Logs** from the Azure Video Indexer account menu, Log Analytics is opened with the query scope set to the current Azure Video Indexer account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure Video Indexer accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
+
+<!-- REQUIRED: Include queries that are helpful for figuring out the health and state of your service. Ideally, use some of these queries in the alerts section. It's possible that some of your queries may be in the Log Analytics UI (sample or example queries). Check if so. -->
+
+Following are queries that you can use to help you monitor your Azure Video Indexer account.
+<!-- Put in a code section here. -->
+
+```kusto
+// Project failures summarized by operationName and Upn, aggregated in 30m windows.
+VIAudit
+| where Status == "Failure"
+| summarize count() by OperationName, bin(TimeGenerated, 30m), Upn
+| render timechart
+```
+
+```kusto
+// Project failures with detailed error message.
+VIAudit
+| where Status == "Failure"
+| parse Description with "ErrorType: " ErrorType ". Message: " ErrorMessage ". Trace" *
+| project TimeGenerated, OperationName, ErrorMessage, ErrorType, CorrelationId, _ResourceId
+```
+
+## Alerts
+
+<!-- SUGGESTED: Include useful alerts on metrics, logs, log conditions or activity log. Ask your PMs if you don't know.
+This information is the BIGGEST request we get in Azure Monitor so do not avoid it long term. People don't know what to monitor for best results. Be prescriptive
+-->
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+
+<!-- only include next line if applications run on your service and work with App Insights. -->
+<!-- If you are creating or running an application which run on <*service*> [Azure Monitor Application Insights](/azure/azure-monitor/overview#application-insights) may offer additional types of alerts.
+<!-- end -->
+
+The following table lists common and recommended alert rules for Azure Video Indexer.
+
+<!-- Fill in the table with metric and log alerts that would be valuable for your service. Change the format as necessary to make it more readable -->
+| Alert type | Condition | Description |
+|:|:|:|
+| Log Alert|Failed operation |Send an alert when an upload fails |
+
+```kusto
+// All failed uploads, aggregated in one-hour windows.
+VIAudit
+| where OperationName == "Upload-Video" and Status == "Failure"
+| summarize count() by bin(TimeGenerated, 1h)
+```
+
+## Next steps
+
+<!-- Add additional links. You can change the wording of these and add more if useful. -->
+
+- See [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md) for a reference of the metrics, logs, and other important values created by your Azure Video Indexer account.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Improved line break logic to better split transcript into sentences. New editing
Azure Video Indexer now supports Diagnostics settings for Audit events. Logs of Audit events can now be exported through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
-The additions enable easier access to analyze the data, monitor resource operation, and create automatically flows to act on an event. For more information, see [Monitor Azure Video Indexer]().
+The additions enable easier access to analyze the data, monitor resource operation, and create automated flows that act on an event. For more information, see [Monitor Azure Video Indexer](monitor-video-indexer.md).
### Video Insights improvements
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
Title: Troubleshoot backup errors with Azure VMs
description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. Previously updated : 06/02/2021 Last updated : 05/13/2022 # Troubleshooting backup failures on Azure virtual machines
This section covers backup operation failures of Azure virtual machines.
* Verify that the VM has internet connectivity. * Make sure another backup service isn't running. * From `Services.msc`, ensure the **Windows Azure Guest Agent** service is **Running**. If the **Windows Azure Guest Agent** service is missing, install it from [Back up Azure VMs in a Recovery Services vault](./backup-azure-arm-vms-prepare.md#install-the-vm-agent).
-* The **Event log** may show backup failures that are from other backup products, for example, Windows Server backup, and aren't due to Azure Backup. Use the following steps to determine whether the issue is with Azure Backup:
+* The **Event log** may show backup failures that come from other backup products (for example, Windows Server Backup) and aren't caused by Azure Backup. Use the following steps to determine whether the issue is with Azure Backup:
* If there's an error with the entry **Backup** in the event source or message, check whether Azure IaaS VM Backup backups were successful, and whether a Restore Point was created with the desired snapshot type. * If Azure Backup is working, then the issue is likely with another backup solution. * Here is an example of an Event Viewer error 517 where Azure Backup was working fine but "Windows Server Backup" was failing: ![Windows Server Backup failing](media/backup-azure-vms-troubleshoot/windows-server-backup-failing.png)
- * If Azure Backup is failing, then look for the corresponding Error Code in the section Common VM backup errors in this article.
+ * If Azure Backup is failing, then look for the corresponding error code in the [Common issues](#common-issues) section.
* If you see the Azure Backup option greyed out on an Azure VM, hover over the disabled menu to find the reason. The reasons could be "Not available with EphemeralDisk" or "Not available with Ultra Disk". ![Reasons for the disablement of Azure Backup option](media/backup-azure-vms-troubleshoot/azure-backup-disable-reasons.png)
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Azure Backup has added the Cross Region Restore feature to strengthen data avail
| Backup Management type | Supported | Supported Regions | | - | | -- |
-| Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA and UG Virginia. |
-| SQL /SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central, UG IOWA, and UG Virginia. |
+| Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA. |
+| SQL /SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central and UG IOWA. |
| MARS Agent/On premises | No | N/A | | AFS (Azure file shares) | No | N/A |
backup Delete Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/delete-recovery-services-vault.md
foreach ($softitem in $containerSoftDelete)
{ Undo-AzRecoveryServicesBackupItemDeletion -Item $softitem -VaultId $VaultToDelete.ID -Force #undelete items in soft delete state }
-#Invoking API to disable enhanced security
-$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
-$profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
-$accesstoken = Get-AzAccessToken
-$token = $accesstoken.Token
-$authHeader = @{
- 'Content-Type'='application/json'
- 'Authorization'='Bearer ' + $token
-}
-$body = @{properties=@{enhancedSecurityState= "Disabled"}}
-$restUri = 'https://management.azure.com/subscriptions/'+$SubscriptionId+'/resourcegroups/'+$ResourceGroup+'/providers/Microsoft.RecoveryServices/vaults/'+$VaultName+'/backupconfig/vaultconfig?api-version=2019-05-13' #Replace "management.azure.com" with "management.usgovcloudapi.net" if your subscription is in USGov.
-$response = Invoke-RestMethod -Uri $restUri -Headers $authHeader -Body ($body | ConvertTo-JSON -Depth 9) -Method PATCH
+#Invoking API to disable Security features (Enhanced Security) to remove MARS/MAB/DPM servers.
+Set-AzRecoveryServicesVaultProperty -VaultId $VaultToDelete.ID -DisableHybridBackupSecurityFeature $true
+Write-Host "Disabled Security features for the vault"
#Fetch all protected items and servers $backupItemsVM = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
Remove-AzRecoveryServicesVault -Vault $VaultToDelete
## Next steps
-[Learn more](../backup-azure-delete-vault.md) about vault deletion process.
+[Learn more](../backup-azure-delete-vault.md) about the vault deletion process.
cognitive-services Howtoanalyzevideo_Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowtoAnalyzeVideo_Vision.md
Last updated 09/09/2019 ms.devlang: csharp-+ # Analyze videos in near real time
cognitive-services Deploy Computer Vision On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/deploy-computer-vision-on-premises.md
Last updated 05/09/2022 -+ # Use Computer Vision container with Kubernetes and Helm
cognitive-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/read-container-migration-guide.md
Last updated 09/28/2021 -+ # Migrate to the Read v3.x OCR containers
cognitive-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-logging.md
Last updated 06/08/2021 -+ # Telemetry and troubleshooting
cognitive-services Upgrade Api Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/upgrade-api-versions.md
Last updated 08/11/2020 -+ # Upgrade from Read v2.x to Read v3.x
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/encrypt-data-at-rest.md
Last updated 08/28/2020 -+ #Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
cognitive-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-delete-data.md
Last updated 03/21/2019 -+ # View or delete user data in Custom Vision
cognitive-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/storage-integration.md
Last updated 06/25/2021 -+ # Integrate Azure storage for notifications and backup
cognitive-services How To Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-migrate-face-data.md
Last updated 02/22/2021 ms.devlang: csharp-+ # Migrate your face data to a different Face subscription
cognitive-services How To Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency.md
Last updated 1/5/2021 ms.devlang: csharp-+ # How to: mitigate latency when using the Face service
cognitive-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/use-persondirectory.md
Last updated 04/22/2021 ms.devlang: csharp-+ # Use the PersonDirectory structure
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/encrypt-data-at-rest.md
Last updated 08/28/2020 -+ #Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
Previously updated : 01/24/2022 Last updated : 05/13/2022
[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
-## Prerequisites
-
-Before you can begin using [Speech Studio](https://speech.microsoft.com), you need to have an Azure account and a Speech resource. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Get the keys for your resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md#get-the-keys-for-your-resource).
-
-After you've created an Azure account and a Speech service resource, do the following:
-
-1. Sign in to [Speech Studio](https://speech.microsoft.com) with your Azure account.
-1. In your Speech Studio subscription, select a Speech resource. You can change the resource at any time by selecting **Settings** at the top of the pane.
- ## Speech Studio features In Speech Studio, the following Speech service features are available as project types:
cognitive-services Data Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/data-filtering.md
Title: Data Filtering - Custom Translator
+ Title: "Legacy: Data Filtering - Custom Translator"
description: When you submit documents to be used for training a custom system, the documents undergo a series of processing and filtering steps to prepare for training.
# Data filtering
-When you submit documents to be used for training a custom system, the documents undergo a series of processing and filtering steps to prepare for training. These steps are explained here. The knowledge of the filtering may help you understand the sentence count displayed in Custom Translator as well as the steps you may take yourself to prepare the documents for training with Custom Translator.
+When you submit documents to be used for training a custom system, the documents undergo a series of processing and filtering steps to prepare for training. These steps are explained here. Knowing how the filtering works may help you understand the sentence count displayed in Custom Translator and the steps you can take to prepare the documents for training with Custom Translator.
## Sentence alignment+ If your document isn't in XLIFF, TMX, or ALIGN format, Custom Translator aligns the sentences of your source and target documents to each other, sentence by sentence. Custom Translator doesn't perform document alignment – it follows your naming of the documents to find the matching document of the other language. Within the document, Custom Translator tries to find the corresponding sentence in the other language. It uses document markup like embedded HTML tags to help with the alignment. If you see a large discrepancy between the number of sentences in the source and target side documents, your document may not have been parallel in the first place, or for other reasons couldn't be aligned. The document pairs with a large difference (>10%) of sentences on each side warrant a second look to make sure they're indeed parallel. Custom Translator shows a warning next to the document if the sentence count differs suspiciously. - ## Deduplication+ Custom Translator removes the sentences that are present in test and tuning documents from training data. The removal happens dynamically inside of the training run, not in the data processing step. Custom Translator reports the sentence count to you in the project overview before such removal. ## Length filter+ * Remove sentences with only one word on either side. * Remove sentences with more than 100 words on either side.  Chinese, Japanese, Korean are exempt.
-* Remove sentences with fewer than 3 characters. Chinese, Japanese, Korean are exempt.
+* Remove sentences with fewer than three characters. Chinese, Japanese, Korean are exempt.
* Remove sentences with more than 2000 characters for Chinese, Japanese, Korean. * Remove sentences with less than 1% alpha characters. * Remove dictionary entries containing more than 50 words.
cognitive-services Document Formats Naming Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/document-formats-naming-convention.md
Title: Document formats and naming conventions - Custom Translator
+ Title: "Legacy: Document formats and naming conventions - Custom Translator"
description: This article is a guide to document formats and naming conventions in Custom Translator to avoid naming conflicts.
This table includes all supported file formats that you can use to build your tr
| Adobe Acrobat | .PDF | Adobe Acrobat portable document | | HTML | .HTML, .HTM | HTML document | | Text file | .TXT | UTF-16 or UTF-8 encoded text files. The file name must not contain Japanese characters. |
-| Aligned text file | .ALIGN | The extension `.ALIGN` is a special extension that you can use if you know that the sentences in the document pair are perfectly aligned. If you provide a `.ALIGN` file, Custom Translator will not align the sentences for you. |
+| Aligned text file | .ALIGN | The extension `.ALIGN` is a special extension that you can use if you know that the sentences in the document pair are perfectly aligned. If you provide a `.ALIGN` file, Custom Translator won't align the sentences for you. |
| Excel file | .XLSX | Excel file (2013 or later). First line/ row of the spreadsheet should be language code. | ## Dictionary formats
-For dictionaries, Custom Translator supports all file formats that are supported for training sets. If you are using an Excel dictionary, the first line/ row of the spreadsheet should be language codes.
+For dictionaries, Custom Translator supports all file formats that are supported for training sets. If you're using an Excel dictionary, the first line/row of the spreadsheet should be language codes.
## Zip file formats
where {document name} is the name of your document, {language code} is the ISO L
For example, to upload two parallel documents within a zip for an English to Spanish system, the files should be named "data_en" and "data_es".
-Translation Memory files (TMX, XLF, XLIFF, LCL, XLSX) are not required to follow the specific language-naming convention.
+Translation Memory files (TMX, XLF, XLIFF, LCL, XLSX) aren't required to follow the specific language-naming convention.
## Next steps
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/faq.md
Title: Frequently asked questions - Custom Translator
+ Title: "Legacy: Frequently asked questions - Custom Translator"
description: This article contains answers to frequently asked questions about the Azure Cognitive Services Custom Translator.
This article contains answers to frequently asked questions about [Custom Transl
There are restrictions and limits with respect to file size, model training, and model deployment. Keep these restrictions in mind when setting up your training to build a model in Custom Translator. - Submitted files must be less than 100 MB in size.-- Monolingual data is not supported.
+- Monolingual data isn't supported.
## When should I request deployment for a translation system that has been trained?
-It may take several trainings to create the optimal translation system for your project. You may want to try using more training data or more carefully filtered data, if the BLEU score and/ or the test results are not satisfactory. You should
+It may take several trainings to create the optimal translation system for your project. You may want to try using more training data or more carefully filtered data if the BLEU score and/or the test results aren't satisfactory. You should
be strict and careful in designing your tuning set and your test set, to be fully representative of the terminology and style of material you want to translate. You can be more liberal in composing your training data, and
-experiment with different options. Request a system deployment when you are
+experiment with different options. Request a system deployment when you're
satisfied with the translations in your system test results, have no more data to add to the training to improve your trained system, and you want to access the trained model via APIs.
an option to skip Custom Translator's sentence breaking and alignment process fo
files that are perfectly aligned and need no further processing. We recommend using the `.align` extension only for files that are perfectly aligned.
-If the number of extracted sentences does not match the two files with the same
+If the number of extracted sentences doesn't match the two files with the same
base name, Custom Translator will still run the sentence aligner on `.align` files.
cognitive-services How To Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-create-project.md
Title: How to create a project - Custom Translator
+ Title: "Legacy: How to create a project - Custom Translator"
description: This article explains how to create and manage a project in the Azure Cognitive Services Custom Translator.
Creating a project is the first step toward building a model.
use a label *only* if you're planning to build multiple projects for the same language pair and same category and want to access these projects with a different CategoryID. Don't use this field if you're
- building systems for one category only. A project label is not required
+ building systems for one category only. A project label isn't required
and doesn't help distinguish between language pairs. You can use the same label for multiple projects.
Creating a project is the first step toward building a model.
The Custom Translator landing page shows the first 10 projects in your workspace. It displays the project name, language pair, category, status, and BLEU score.
-After selecting a project, you'll see the following on the project page:
+After selecting a project, you'll see the following information on the project page:
- CategoryID: A CategoryID is created by concatenating the WorkspaceID, project label, and category code. You use the CategoryID with the Text
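As an illustration of the concatenation described above, here's a minimal sketch; the exact delimiter (none is shown here) and the function name are assumptions, not the documented format.

```js
// Hypothetical sketch of composing a CategoryID from its parts.
function buildCategoryId(workspaceId, projectLabel, categoryCode) {
  return `${workspaceId}${projectLabel}${categoryCode}`;
}
```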
cognitive-services How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-manage-settings.md
Title: How to manage settings? - Custom Translator
+ Title: "Legacy: How to manage settings? - Custom Translator"
description: How to manage settings, create workspace, share workspace, and manage key in Custom Translator.
In Custom Translator you can share your workspace with others, if different part
![Share workspace dialog](media/how-to/share-workspace-dialog.png)
-4. If your workspace still has the default name "My workspace", you will be required to change it before sharing your workspace.
+4. If your workspace still has the default name "My workspace", you'll be required to change it before sharing your workspace.
5. Select **Save**. ## Sharing permissions
In Custom Translator you can share your workspace with others, if different part
When a workspace is shared, the **Sharing settings** section shows all email addresses that this workspace is shared with. You can change existing sharing permission for each email address if you have owner access to the workspace.
-1. In the **Sharing settings** section, for each email a dropdown menu shows the current permission level.
+1. In the **Sharing settings** section, for each email, a dropdown menu shows the current permission level.
2. Choose the dropdown menu and select the new permission level you want to assign to that email address.
cognitive-services How To Search Edit Delete Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-search-edit-delete-projects.md
Title: How to search, edit, and delete project - Custom Translator
+ Title: "Legacy: How to search, edit, and delete project - Custom Translator"
description: Custom Translator provides various ways to manage your projects in an efficient manner. You can create multiple projects, search based on your criteria, and edit your projects. Deleting a project is also possible in Custom Translator.
The filter tool allows you to search projects by different filter conditions. It
## Edit a project
-Custom Translator gives you the ability to edit the name and description of a project. Other project metadata like the category, source language, and target language are not available for edit. The steps below describe how to edit a project.
+Custom Translator gives you the ability to edit the name and description of a project. Other project metadata like the category, source language, and target language aren't available for edit. The steps below describe how to edit a project.
1. Select the **pencil icon** that appears when you hover over a project. ![Edit project](media/how-to/how-to-edit-project.png)
-2. In the dialog, you can modify the project name, the description of the project, the category description, and the project label if no model is deployed. You cannot modify the category or language pair once the project is created.
+2. In the dialog, you can modify the project name, the description of the project, the category description, and the project label if no model is deployed. You can't modify the category or language pair once the project is created.
![Edit project dialog](media/how-to/how-to-edit-project-dialog.png)
You can delete a project when you no longer need it. Make sure the project doesn
![Delete project](media/how-to/how-to-delete-project.png)
-2. Confirm deletion. Deleting a project will delete all models that were created within that project. Deleting project will not affect your documents.
+2. Confirm deletion. Deleting a project will delete all models that were created within that project. Deleting a project won't affect your documents.
![Delete confirmation dialog](media/how-to/how-to-delete-project-confirm.png)
cognitive-services How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-train-model.md
Title: Train a model - Custom Translator
+ Title: "Legacy: Train a model - Custom Translator"
description: How to train and build a custom translation model.
# Train a model
-Training a model is the first and most important step to building a translation model, otherwise, model can't be built. Training happens based on documents you select for the trainings. When you select documents of "Training" document type, be mindful of the 10,000 parallel sentences minimum requirement. As you select documents, we display the total number of training sentences to guide you. This requirement does not apply when you only select documents of dictionary document type to train a model.
+Training a model is the first and most important step in building a translation model; without training, a model can't be built. Training is based on the documents you select for the training run. When you select documents of the "Training" document type, be mindful of the 10,000 parallel sentences minimum requirement. As you select documents, we display the total number of training sentences to guide you. This requirement doesn't apply when you only select documents of the dictionary document type to train a model.
To train a model:
To train a model:
- Document name: Name of the document.
- - Pairing: If this document is a parallel or monolingual document. Monolingual documents are currently not supported for training.
+ - Pairing: Indicates whether this document is a parallel or monolingual document. Monolingual documents currently aren't supported for training.
- Document type: Can be training, tuning, testing, or dictionary.
To train a model:
3. Select **Create model** button.
-4. On the dialog, specify the name for your model. By default, "Train immediately" is selected to start the training pipeline when you select the **Create model** button. You can select **Save as draft** to create the model metadata and put the model in a draft state but model training would not start. At a later time, you have to manually select models in draft state to train.
+4. In the dialog, specify a name for your model. By default, "Train immediately" is selected to start the training pipeline when you select the **Create model** button. You can select **Save as draft** to create the model metadata and put the model in a draft state without starting training. Later, you have to manually select draft models to train them.
5. Select the **Create model** button.
cognitive-services How To Upload Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-upload-document.md
Title: How to upload a document - Custom Translator
+ Title: "Legacy: How to upload a document - Custom Translator"
description: The document upload feature uploads parallel documents (two documents where one is the origin and the other is the translation) into the service.
In the upload history page, you can view the history of all document uploads, with details like
![Upload history tab](media/how-to/how-to-upload-history-1.png) 2. This page shows the status of all of your past uploads. It displays
- uploads from most recent to least recent. For each upload, it shows the document name, upload status, the upload date, the number of files uploaded, type of file uploaded, and the language pair of the file.
+ uploads from most recent to least recent. For each upload, it shows the document name, upload status, upload date, number of files uploaded, type of file uploaded, and language pair of the file.
![Upload history page](media/how-to/how-to-document-history-2.png) 3. Select any upload history record. In upload history details page,
- you can view the files uploaded as part of the upload, uploaded status of the file, language of the file and error message (if there is any error in upload).
+ you can view the uploaded files, upload status of the file, file language, and error messages.
## Next steps
cognitive-services How To View Document Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-view-document-details.md
Title: Document details - Custom Translator
+ Title: "Legacy: Document details - Custom Translator"
description: The document list page shows the first 10 documents in your workspace. For each document, it displays the name, pairing, type, language, upload time stamp, and the email address of the user who uploaded the document.
Select an individual document to view the document details page. The document de
## Delete a document
-User must be a workspace owner to delete document to delete a document. Additionally, if a document is in use by a model, that is in any part of the training process or any part of the deployment process, the document can't be deleted.
+You must be a workspace owner to delete a document. Additionally, if a document is in use by a model, it can't be deleted.
1. Go to the documents page. 2. Hover over any document record and select the **trash bin** icon.
cognitive-services How To View Model Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-view-model-details.md
Title: View model details - Custom Translator
+ Title: "Legacy: View the model details - Custom Translator"
description: The Models tab under any project shows details of each model, such as model name, status, BLEU score, and training, tuning, and testing sentence counts.
Last updated 12/06/2021
-#Customer intent: As a Custom Translator user, I want to understand how to view model details, so that I can review details of each translation model.
+#Customer intent: As a Custom Translator user, I want to understand how to view the model details, so that I can review details of each translation model.
# View model details
For each model in the project, these details are displayed.
1. Model Name: Shows the name of a given model. 2. Status: Shows the status of a given model. Your new training will have a status
- of Submitted until it is accepted. The status will change to Data processing
+ of Submitted until it's accepted. The status will change to Data processing
while the service evaluates the content of your documents. When the
- evaluation of your documents is complete the status will change to Running
- and you will be able the see the number of sentences that are part of the
+ evaluation of your documents is complete, the status will change to Running.
+ You'll be able to see the number of sentences that are part of the
training, including the tuning and testing sets that are created for you automatically. The following list of model statuses describes the state of the models.
cognitive-services How To View System Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md
Title: View system test results and deployment - Custom Translator
+ Title: "Legacy: View system test results and deployment - Custom Translator"
description: When your training is successful, review system tests to analyze your training results. If you're satisfied with the training results, place a deployment request for the trained model.
cognitive-services Quickstart Build Deploy Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/quickstart-build-deploy-custom-model.md
Title: "Quickstart: Build, deploy, and use a custom model"
+ Title: "Legacy: Quickstart - Build, deploy, and use a custom model"
description: A step-by-step guide to building a translation system using the Custom Translator Legacy.
cognitive-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/release-notes.md
Title: Release notes - Custom Translator
+ Title: "Legacy: Release notes - Custom Translator"
description: Custom Translator releases, improvements, bug fixes, and known issues.
This page has the latest release notes for features, improvements, bug fixes, an
| Source Language | Target Language | |-|--|
-| Arabic (ar) | English (en-us)|
-| Brazilian Portuguese (pt) | English (en-us)|
-| Bulgarian (bg) | English (en-us)|
-| Chinese Simplified (zh-Hans) | English (en-us)|
-| Chinese Traditional (zh-Hant) | English (en-us)|
-| Croatian (hr) | English (en-us)|
-| Czech (cs) | English (en-us)|
-| Danish (da) | English (en-us)|
-| Dutch (nl) | English (en-us)|
-| English (en-us) | Arabic (ar)|
-| English (en-us) | Bulgarian (bg)|
-| English (en-us) | Chinese Simplified (zh-Hans|
-| English (en-us) | Chinese Traditional (zh-Hant|
-| English (en-us) | Czech (cs)|
-| English (en-us) | Danish (da)|
-| English (en-us) | Dutch (nl)|
-| English (en-us) | Estonian (et)|
-| English (en-us) | Fijian (fj)|
-| English (en-us) | Finnish (fi)|
-| English (en-us) | French|
-| English (en-us) | Greek (el)|
-| English (en-us) | Hindi|
-| English (en-us) | Hungarian (hu)|
-| English (en-us) | Icelandic (is)|
-| English (en-us) | Indonesian (id)|
-| English (en-us) | Inuktitut (iu)|
-| English (en-us) | Irish (ga)|
-| English (en-us) | Italian (it)|
-| English (en-us) | Japanese (ja)|
-| English (en-us) | Korean (ko)|
-| English (en-us) | Lithuanian (lt)|
-| English (en-us) | Norwegian (nb)|
-| English (en-us) | Polish (pl)|
-| English (en-us) | Romanian (ro)|
-| English (en-us) | Samoan|
-| English (en-us) | Slovak (sk)|
-| English (en-us) | Spanish (es)|
-| English (en-us) | Swedish (sv)|
-| English (en-us) | Tahitian (ty)|
-| English (en-us) | Thai (th)|
-| English (en-us) | Tongan (to)|
-| English (en-us) | Turkish (tr)|
-| English (en-us) | Ukrainian|
-| English (en-us) | Welsh (cy)|
-| Estonian (et) | English (en-us)|
-| Fijian | English (en-us)|
-| Finnish (fi) | English (en-us)|
-| German (de) | English (en-us)|
-| Greek (el) | English (en-us)|
-| Hungarian (hu) | English (en-us)|
-| Icelandic (is) | English (en-us)|
-| Indonesian (id) | English (en-us)
-| Inuktitut (iu) | English (en-us)|
-| Irish (ga) | English (en-us)|
-| Italian (it) | English (en-us)|
-| Japanese (ja) | English (en-us)|
-| Kazakh (kk) | English (en-us)|
-| Korean (ko) | English (en-us)|
-| Lithuanian (lt) | English (en-us)|
-| Malagasy (mg) | English (en-us)|
-| Maori (mi) | English (en-us)|
-| Norwegian (nb) | English (en-us)|
-| Persian (fa) | English (en-us)|
-| Polish (pl) | English (en-us)|
-| Romanian (ro) | English (en-us)|
-| Russian (ru) | English (en-us)|
-| Slovak (sk) | English (en-us)|
-| Spanish (es) | English (en-us)|
-| Swedish (sv) | English (en-us)|
-| Tahitian (ty) | English (en-us)|
-| Thai (th) | English (en-us)|
-| Tongan (to) | English (en-us)|
-| Turkish (tr) | English (en-us)|
-| Vietnamese (vi) | English (en-us)|
-| Welsh (cy) | English (en-us)|
+| Arabic (`ar`) | English (`en-us`)|
+| Brazilian Portuguese (`pt`) | English (`en-us`)|
+| Bulgarian (`bg`) | English (`en-us`)|
+| Chinese Simplified (`zh-Hans`) | English (`en-us`)|
+| Chinese Traditional (`zh-Hant`) | English (`en-us`)|
+| Croatian (`hr`) | English (`en-us`)|
+| Czech (`cs`) | English (`en-us`)|
+| Danish (`da`) | English (`en-us`)|
+| Dutch (`nl`) | English (`en-us`)|
+| English (`en-us`) | Arabic (`ar`)|
+| English (`en-us`) | Bulgarian (`bg`)|
+| English (`en-us`) | Chinese Simplified (`zh-Hans`)|
+| English (`en-us`) | Chinese Traditional (`zh-Hant`)|
+| English (`en-us`) | Czech (`cs`)|
+| English (`en-us`) | Danish (`da`)|
+| English (`en-us`) | Dutch (`nl`)|
+| English (`en-us`) | Estonian (`et`)|
+| English (`en-us`) | Fijian (`fj`)|
+| English (`en-us`) | Finnish (`fi`)|
+| English (`en-us`) | French (`fr`)|
+| English (`en-us`) | Greek (`el`)|
+| English (`en-us`) | Hindi (`hi`) |
+| English (`en-us`) | Hungarian (`hu`)|
+| English (`en-us`) | Icelandic (`is`)|
+| English (`en-us`) | Indonesian (`id`)|
+| English (`en-us`) | Inuktitut (`iu`)|
+| English (`en-us`) | Irish (`ga`)|
+| English (`en-us`) | Italian (`it`)|
+| English (`en-us`) | Japanese (`ja`)|
+| English (`en-us`) | Korean (`ko`)|
+| English (`en-us`) | Lithuanian (`lt`)|
+| English (`en-us`) | Norwegian (`nb`)|
+| English (`en-us`) | Polish (`pl`)|
+| English (`en-us`) | Romanian (`ro`)|
+| English (`en-us`) | Samoan (`sm`)|
+| English (`en-us`) | Slovak (`sk`)|
+| English (`en-us`) | Spanish (`es`)|
+| English (`en-us`) | Swedish (`sv`)|
+| English (`en-us`) | Tahitian (`ty`)|
+| English (`en-us`) | Thai (`th`)|
+| English (`en-us`) | Tongan (`to`)|
+| English (`en-us`) | Turkish (`tr`)|
+| English (`en-us`) | Ukrainian (`uk`) |
+| English (`en-us`) | Welsh (`cy`)|
+| Estonian (`et`) | English (`en-us`)|
+| Fijian (`fj`) | English (`en-us`)|
+| Finnish (`fi`) | English (`en-us`)|
+| German (`de`) | English (`en-us`)|
+| Greek (`el`) | English (`en-us`)|
+| Hungarian (`hu`) | English (`en-us`)|
+| Icelandic (`is`) | English (`en-us`)|
+| Indonesian (`id`) | English (`en-us`)|
+| Inuktitut (`iu`) | English (`en-us`)|
+| Irish (`ga`) | English (`en-us`)|
+| Italian (`it`) | English (`en-us`)|
+| Japanese (`ja`) | English (`en-us`)|
+| Kazakh (`kk`) | English (`en-us`)|
+| Korean (`ko`) | English (`en-us`)|
+| Lithuanian (`lt`) | English (`en-us`)|
+| Malagasy (`mg`) | English (`en-us`)|
+| Maori (`mi`) | English (`en-us`)|
+| Norwegian (`nb`) | English (`en-us`)|
+| Persian (`fa`) | English (`en-us`)|
+| Polish (`pl`) | English (`en-us`)|
+| Romanian (`ro`) | English (`en-us`)|
+| Russian (`ru`) | English (`en-us`)|
+| Slovak (`sk`) | English (`en-us`)|
+| Spanish (`es`) | English (`en-us`)|
+| Swedish (`sv`) | English (`en-us`)|
+| Tahitian (`ty`) | English (`en-us`)|
+| Thai (`th`) | English (`en-us`)|
+| Tongan (`to`) | English (`en-us`)|
+| Turkish (`tr`) | English (`en-us`)|
+| Vietnamese (`vi`) | English (`en-us`)|
+| Welsh (`cy`) | English (`en-us`)|
cognitive-services Sentence Alignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/sentence-alignment.md
Title: Sentence pairing and alignment - Custom Translator
+ Title: "Legacy: Sentence pairing and alignment - Custom Translator"
-description: During the training execution, sentences present in parallel documents are paired or aligned. Custom Translator learns translations one sentence at a time, by reading a sentence, the translation of this sentence. Then it aligns words and phrases in these two sentences to each other.
+description: During the training execution, sentences present in parallel documents are paired or aligned. Custom Translator learns translations one sentence at a time, by reading a sentence and the translation of that sentence. Then it aligns words and phrases in these two sentences to each other.
and upload with an `.align` extension. The `.align` extension signals Custom
Translator that it should skip sentence alignment. For best results, try to make sure that you have one sentence per line in your
-files. Don't have newline characters within a sentence as this will cause poor
+ files. Don't include newline characters within a sentence; they cause poor
alignments. ## Suggested minimum number of sentences
cognitive-services Training And Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/training-and-model.md
Title: What are trainings and models? - Custom Translator
+ Title: "Legacy: What are trainings and models? - Custom Translator"
-description: A model is the system, which provides translation for a specific language pair. The outcome of a successful training is a model. When training a model, three mutually exclusive data sets are required training dataset, tuning dataset, and testing dataset.
+description: A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive data sets are required: a training dataset, a tuning dataset, and a testing dataset.
# What are trainings and models?
-A model is the system, which provides translation for a specific language pair. The outcome of a successful training is a model. When training a model, three mutually exclusive document types are required: training, tuning, and testing. Dictionary document type can also be provided. For more information, _see_ [Sentence alignment](./sentence-alignment.md#suggested-minimum-number-of-sentences).
+A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive document types are required: training, tuning, and testing. A dictionary document type can also be provided. For more information, _see_ [Sentence alignment](./sentence-alignment.md#suggested-minimum-number-of-sentences).
If only training data is provided when queuing a training, Custom Translator will automatically assemble tuning and testing data. It will use a random subset of sentences from your training documents, and exclude these sentences from the training data itself.
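The following sketch shows the kind of random hold-out split described above. The set sizes and the shuffle are illustrative assumptions; the service's internal selection logic isn't public.

```js
// Illustrative random hold-out split of sentence pairs into train/tune/test.
function splitSentences(pairs, tuneCount = 2500, testCount = 2500) {
  const shuffled = [...pairs];
  for (let i = shuffled.length - 1; i > 0; i--) { // Fisher-Yates shuffle
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return {
    tuning: shuffled.slice(0, tuneCount),
    testing: shuffled.slice(tuneCount, tuneCount + testCount),
    training: shuffled.slice(tuneCount + testCount), // held-out pairs excluded
  };
}
```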
If only training data is provided when queuing a training, Custom Translator wil
Documents included in the training set are used by Custom Translator as the basis for building your model. During training execution, sentences that are present in these documents are aligned (or paired). You can take liberties in composing your set of training documents. You can include documents that you believe are of tangential relevance in one model, then exclude them in another to see the impact on the [BLEU (Bilingual Evaluation Understudy) score](what-is-bleu-score.md). As long as you keep the tuning set and test set constant, feel free to experiment with the composition of the training set. This approach is an effective way to modify the quality of your translation system.
-You can run multiple trainings within a project and compare the [BLEU scores](what-is-bleu-score.md) across all training runs. When you are running multiple trainings for comparison, ensure same tuning/ test data is specified each time. Also make sure to also inspect the results manually in the ["Testing"](how-to-view-system-test-results.md) tab.
+You can run multiple trainings within a project and compare the [BLEU scores](what-is-bleu-score.md) across all training runs. When you're running multiple trainings for comparison, ensure the same tuning/test data is specified each time. Also make sure to inspect the results manually in the ["Testing"](how-to-view-system-test-results.md) tab.
## Tuning document type for Custom Translator Parallel documents included in this set are used by the Custom Translator to tune the translation system for optimal results.
-The tuning data is used during training to adjust all parameters and weights of the translation system to the optimal values. Choose your tuning data carefully: the tuning data should be representative of the content of the documents you intend to translate in the future. The tuning data has a major influence on the quality of the translations produced. Tuning enables the translation system to provide translations that are closest to the samples you provide in the tuning data. You do not need more than 2500 sentences in your tuning data. For optimal translation quality, it is recommended to select the tuning set manually by choosing the most representative selection of sentences.
+The tuning data is used during training to adjust all parameters and weights of the translation system to the optimal values. Choose your tuning data carefully: the tuning data should be representative of the content of the documents you intend to translate in the future. The tuning data has a major influence on the quality of the translations produced. Tuning enables the translation system to provide translations that are closest to the samples you provide in the tuning data. You don't need more than 2,500 sentences in your tuning data. For optimal translation quality, we recommend selecting the tuning set manually by choosing the most representative selection of sentences.
-When creating your tuning set, choose sentences that are a meaningful and representative length of the future sentences that you expect to translate. Choose sentences that have words and phrases that you intend to translate in the approximate distribution that you expect in your future translations. In practice, a sentence length of 7 to 10 words will produce the best results, because these sentences contain enough context to show inflection and provide a phrase length that is significant, without being overly complex.
+When creating your tuning set, choose sentences that are a meaningful and representative length of the future sentences that you expect to translate. Choose sentences that have words and phrases that you intend to translate in the approximate distribution that you expect in your future translations. In practice, a sentence length of 7 to 10 words will produce the best results. These sentences contain enough context to show inflection and provide a phrase length that is significant, without being overly complex.
A good description of the type of sentences to use in the tuning set is prose: actual fluent sentences. Not table cells, not poems, not lists of things, not only punctuation or numbers in a sentence, but regular language.
-If you manually select your tuning data, it should not have any of the same sentences as your training and testing data. The tuning data has a significant impact on the quality of the translations - choose the sentences carefully.
+If you manually select your tuning data, it shouldn't have any of the same sentences as your training and testing data. The tuning data has a significant impact on the quality of the translations - choose the sentences carefully.
-If you are not sure what to choose for your tuning data, just select the training data and let Custom Translator select the tuning data for you. When you let the Custom Translator choose the tuning data automatically, it will use a random subset of sentences from your bilingual training documents and exclude these sentences from the training material itself.
+If you aren't sure what to choose for your tuning data, just select the training data and let Custom Translator select the tuning data for you. When you let the Custom Translator choose the tuning data automatically, it will use a random subset of sentences from your bilingual training documents and exclude these sentences from the training material itself.
## Testing dataset for Custom Translator
Parallel documents included in the testing set are used to compute the BLEU (Bil
The BLEU score is a measurement of the delta between the automatic translation and the reference translation. Its value ranges from 0 to 100. A score of 0 indicates that not a single word of the reference appears in the translation. A score of 100 indicates that the automatic translation exactly matches the reference: the same word is in the exact same position. The score you receive is the BLEU score average for all sentences of the testing data.
-The test data should include parallel documents where the target language sentences are the most desirable translations of the corresponding source language sentences in the source-target pair. You may want to use the same criteria you used to compose the tuning data. However, the testing data has no influence over the quality of the translation system. It is used exclusively to generate the BLEU score for you.
+The test data should include parallel documents where the target language sentences are the most desirable translations of the corresponding source language sentences in the source-target pair. You may want to use the same criteria you used to compose the tuning data. However, the testing data has no influence over the quality of the translation system. It's used exclusively to generate the BLEU score for you.
You don't need more than 2,500 sentences as the testing data. When you let the system choose the testing set automatically, it will use a random subset of sentences from your bilingual training documents, and exclude these sentences from the training material itself.
cognitive-services Unsupported Language Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/unsupported-language-deployments.md
Title: Unsupported language deployments - Custom Translator
+ Title: "Legacy: Unsupported language deployments - Custom Translator"
description: This article shows you how to deploy unsupported language pairs in Azure Cognitive Services Custom Translator.
<!--Custom Translator provides the highest-quality translations possible using the latest techniques in neural machine learning. While Microsoft intends to make neural training available in all languages, there are some limitations that prevent us from being able to offer neural machine translation in all language pairs.-->
-With the upcoming retirement of the Microsoft Translator Hub, Microsoft will be undeploying all models currently deployed through the Hub. Many of you have models deployed in the Hub whose language pairs are not supported in Custom Translator. We do not want users in this situation to have no recourse for translating their content.
+With the upcoming retirement of the Microsoft Translator Hub, Microsoft will be undeploying all models currently deployed through the Hub. Many of you have models deployed in the Hub whose language pairs aren't supported in Custom Translator. We don't want users in this situation to have no recourse for translating their content.
We now have a process that allows you to deploy your unsupported models through the Custom Translator. This process enables you to continue to translate content using the latest V3 API. These models will be hosted until you choose to undeploy them or the language pair becomes available in Custom Translator. This article explains the process to deploy models with unsupported language pairs.
We now have a process that allows you to deploy your unsupported models through
In order for your models to be candidates for deployment, they must meet the following criteria: * The project containing the model must have been migrated from the Hub to the Custom Translator using the Migration Tool. * The model must be in the deployed state when the migration happens.
-* The language pair of the model must be an unsupported language pair in Custom Translator. Language pairs in which a language is supported to or from English, but the pair itself does not include English, are candidates for unsupported language deployments. For example, a Hub model for a French to German language pair is considered an unsupported language pair even though French to English and English to German are supported language pair.
+* The language pair of the model must be an unsupported language pair in Custom Translator. Language pairs in which a language is supported to or from English, but the pair itself doesn't include English, are candidates for unsupported language deployments. For example, a Hub model for a French to German language pair is considered an unsupported language pair even though French to English and English to German are supported language pairs.
## Process
-Once you have migrated models from the Hub that are candidates for deployment, you can find them by going to the **Settings** page for your workspace and scrolling to the end of the page where you will see an **Unsupported Translator Hub Trainings** section. This section only appears if you have projects that meet the prerequisites mentioned above.
+Once you have migrated models from the Hub that are candidates for deployment, you can find them by going to the **Settings** page for your workspace and scrolling to the end of the page where you'll see an **Unsupported Translator Hub Trainings** section. This section only appears if you have projects that meet the prerequisites mentioned above.
![Screenshot that highlights the Unsupported Translator Hub Trainings section.](media/unsupported-language-deployments/unsupported-translator-hub-trainings.jpg)
Once submitted, the model will no longer be available on the **Unrequested train
## What's next?
-The models you selected for deployment are saved once the Hub is decommissioned and all models are undeployed. You have until May 24 to submit requests for deployment of unsupported models. We will deploy these models on June 15 at which point they will be accessible through the Translator V3 API. In addition, they will be available through the V2 API until July 1.
+The models you selected for deployment are saved once the Hub is decommissioned and all models are undeployed. You have until May 24 to submit requests for deployment of unsupported models. We'll deploy these models on June 15, at which point they'll be accessible through the Translator V3 API. In addition, they'll be available through the V2 API until July 1.
For further information on important dates in the deprecation of the Hub, check [here](https://www.microsoft.com/translator/business/hub/). Once deployed, normal hosting charges will apply. See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/) for details.
-Unlike standard Custom Translator models, Hub models will only be available in a single region, so multi-region hosting charges will not apply. Once deployed, you will be able to undeploy your Hub model at any time through the migrated Custom Translator project.
+Unlike standard Custom Translator models, Hub models will only be available in a single region, so multi-region hosting charges won't apply. Once deployed, you'll be able to undeploy your Hub model at any time through the migrated Custom Translator project.
## Next steps
cognitive-services What Are Parallel Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/what-are-parallel-documents.md
Title: What are parallel documents? - Custom Translator
+ Title: "Legacy: What are parallel documents? - Custom Translator"
description: Parallel documents are pairs of documents where one is the translation of the other. One document in the pair contains sentences in the source language and the other document contains these sentences translated into the target language.
system in either direction.
## Requirements
-You will need a minimum of 10,000 unique aligned parallel sentences to train a system. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. As a best practice, continuously add more parallel content and retrain to improve the quality of your translation system. For more information, *see* [Sentence Alignment](./sentence-alignment.md).
+You'll need a minimum of 10,000 unique aligned parallel sentences to train a system. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. As a best practice, continuously add more parallel content and retrain to improve the quality of your translation system. For more information, *see* [Sentence Alignment](./sentence-alignment.md).
-Microsoft requires that documents uploaded to the Custom Translator do not violate a third party's copyright or intellectual properties. For more information, please see the [Terms of Use](https://azure.microsoft.com/support/legal/cognitive-services-terms/). Uploading a document using the portal does not alter the ownership of the intellectual property in the document itself.
+Microsoft requires that documents uploaded to the Custom Translator don't violate a third party's copyright or intellectual properties. For more information, see the [Terms of Use](https://azure.microsoft.com/support/legal/cognitive-services-terms/). Uploading a document using the portal doesn't alter the ownership of the intellectual property in the document itself.
## Use of parallel documents
Parallel documents are used by the system:
phrases. A word may not always translate to the exact same word in the other language.
-As a best practice, make sure that there is a 1:1 sentence correspondence between
+As a best practice, make sure that there's a 1:1 sentence correspondence between
the source and target language versions of the documents. If your project is domain (category) specific, your documents should be
can do during translation.
Documents uploaded are private to each workspace and can be used in as many projects or trainings as you like. Sentences extracted from your documents are stored separately in your repository as plain Unicode text files and are
-available for you delete. Do not use the Custom Translator as a document
-repository, you will not be able to download the documents you uploaded in the
+available for you to delete. Don't use the Custom Translator as a document
+repository; you won't be able to download the documents you uploaded in the
format you uploaded them in. ## Next steps
cognitive-services What Is Bleu Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/what-is-bleu-score.md
Title: What is a BLEU score? - Custom Translator
+ Title: "Legacy: What is a BLEU score? - Custom Translator"
description: BLEU is a measurement of the differences between machine translation and human-created reference translations of the same source sentence.
The BLEU algorithm compares consecutive phrases of the automatic translation
with the consecutive phrases it finds in the reference translation, and counts the number of matches, in a weighted fashion. These matches are position independent. A higher match degree indicates a higher degree of similarity with
-the reference translation, and higher score. Intelligibility and grammatical correctness are not taken into account.
+the reference translation, and a higher score. Intelligibility and grammatical correctness aren't taken into account.
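To illustrate the position-independent, clipped matching at the heart of BLEU, here's a minimal sketch of modified n-gram precision for a single sentence. Real BLEU combines several n-gram orders with weights and a brevity penalty, which this sketch omits.

```js
// Clipped n-gram precision for one candidate/reference pair (BLEU's core idea).
function ngramPrecision(candidate, reference, n) {
  const grams = (words) => {
    const counts = new Map();
    for (let i = 0; i + n <= words.length; i++) {
      const g = words.slice(i, i + n).join(' ');
      counts.set(g, (counts.get(g) || 0) + 1);
    }
    return counts;
  };
  const cand = grams(candidate.split(/\s+/));
  const ref = grams(reference.split(/\s+/));
  let matches = 0;
  let total = 0;
  for (const [g, c] of cand) {
    total += c;
    matches += Math.min(c, ref.get(g) || 0); // clipped, position independent
  }
  return total === 0 ? 0 : matches / total;
}
```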
## How BLEU works
-BLEU's strength is that it correlates well with human judgment by averaging out
+The BLEU score's strength is that it correlates well with human judgment. BLEU averages out
individual sentence judgment errors over a test corpus, rather than attempting to devise the exact human judgment for every sentence. A more extensive discussion of BLEU scores is [here](https://youtu.be/-UqDljMymMg).
-BLEU results depend strongly on the breadth of your domain, the consistency of
-the test data with the training and tuning data, and how much data you have
-available to train. If your models have been trained on a narrow domain, and
+BLEU results depend strongly on the breadth of your domain; consistency of
+test, training, and tuning data; and how much data you have
+available for training. If your models have been trained on a narrow domain, and
your training data is consistent with your test data, you can expect a high BLEU score.
cognitive-services What Is Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/what-is-dictionary.md
Title: What is a dictionary? - Custom Translator
+ Title: "Legacy: What is a dictionary? - Custom Translator"
description: How to create an aligned document that specifies a list of phrases or sentences (and their translations) that you always want Microsoft Translator to translate the same way. Dictionaries are sometimes also called glossaries or term bases.
# What is a dictionary?
-A dictionary is an aligned pair of documents that specifies a list of phrases or sentences and their corresponding translations. Use a dictionary in your training, when you want Microsoft Translator to always translate any instances of the source phrase or sentence, using the translation you've provided in the dictionary. Dictionaries are sometimes called glossaries or term bases. You can think of the dictionary as a brute force "copy and replace" for all the terms you list. Furthermore, Microsoft Custom Translator service builds and makes use of its own general purpose dictionaries to improve the quality of its translation. However, a customer provided dictionary takes precedent and will be searched first to look up words or sentences.
+A dictionary is an aligned pair of documents that specifies a list of phrases or sentences and their corresponding translations. Use a dictionary in your training when you want Translator to always translate any instances of the source phrase or sentence, using the translation you've provided in the dictionary. Dictionaries are sometimes called glossaries or term bases. You can think of the dictionary as a brute force "copy and replace" for all the terms you list. Furthermore, Microsoft Custom Translator service builds and makes use of its own general purpose dictionaries to improve the quality of its translation. However, a customer-provided dictionary takes precedence and will be searched first to look up words or sentences.
Dictionaries only work for projects in language pairs that have a fully supported Microsoft general neural network model behind them. [View the complete list of languages](../language-support.md). ## Phrase dictionary
-A phrase dictionary is case-sensitive. It is an exact find-and-replace operation. When you include a phrase dictionary in training your model, any word or phrase listed is translated in the way specified. The rest of the sentence is translated as usual. You can use a phrase dictionary to specify phrases that shouldn't be translated by providing the same untranslated phrase in the source and target files.
+A phrase dictionary is case-sensitive. It's an exact find-and-replace operation. When you include a phrase dictionary in training your model, any word or phrase listed is translated in the way specified. The rest of the sentence is translated as usual. You can use a phrase dictionary to specify phrases that shouldn't be translated by providing the same untranslated phrase in the source and target files.
## Sentence dictionary
-A sentence dictionary is case-insensitive. The sentence dictionary allows you to specify an exact target translation for a source sentence. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If the source dictionary entry ends with punctuation, it is ignored during the match. If only a portion of the sentence matches, the entry won't match. When a match is detected, the target entry of the sentence dictionary will be returned.
+A sentence dictionary is case-insensitive. The sentence dictionary allows you to specify an exact target translation for a source sentence. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If the source dictionary entry ends with punctuation, it's ignored during the match. If only a portion of the sentence matches, the entry won't match. When a match is detected, the target entry of the sentence dictionary will be returned.
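The following sketch contrasts the two behaviors described above: a case-sensitive find-and-replace for the phrase dictionary and a case-insensitive whole-sentence match that ignores end punctuation for the sentence dictionary. The function names and normalization details are illustrative assumptions.

```js
// Phrase dictionary: case-sensitive, exact find-and-replace of listed phrases.
function applyPhraseDictionary(sentence, entries) {
  let out = sentence;
  for (const [src, tgt] of entries) out = out.split(src).join(tgt);
  return out;
}

// Sentence dictionary: case-insensitive; the whole sentence must match,
// ignoring end-of-sentence punctuation.
function matchSentenceDictionary(sentence, entries) {
  const normalize = (s) => s.trim().replace(/[.!?]+$/, '').toLowerCase();
  const probe = normalize(sentence);
  for (const [src, tgt] of entries) {
    if (normalize(src) === probe) return tgt; // return the target entry
  }
  return null; // partial matches don't count
}
```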
## Dictionary-only trainings
-You can train a model using only dictionary data. To do so, select only the dictionary document (or multiple dictionary documents) that you wish to include and select **Create model**. Since this training is dictionary-only, there is no minimum number of training sentences required. Your model will typically complete training much faster than a standard training. The resulting models will use the Microsoft baseline models for translation with the addition of the dictionaries you have added. You won't get a test report.
+You can train a model using only dictionary data. To do so, select only the dictionary document (or multiple dictionary documents) that you wish to include and select **Create model**. Since this training is dictionary-only, there's no minimum number of training sentences required. Your model will typically complete training much faster than a standard training. The resulting models will use the Microsoft baseline models for translation with the addition of the dictionaries you've added. You won't get a test report.
>[!Note] >Custom Translator doesn't sentence align dictionary files, so it is important that there are an equal number of source and target phrases/sentences in your dictionary documents and that they are precisely aligned. ## Recommendations -- Dictionaries are not a substitute for training a model using training data. We recommended letting the system learn from your training data for better results. However, when sentences or compound nouns must be rendered as-is, use a dictionary.
+- Dictionaries aren't a substitute for training a model using training data. We recommend letting the system learn from your training data for better results. However, when sentences or compound nouns must be rendered as-is, use a dictionary.
- The phrase dictionary should be used sparingly. When a phrase within a sentence is replaced, the context within that sentence is lost or limited for translating the rest of the sentence. The result is that while the phrase or word within the sentence will translate according to the provided dictionary, the overall translation quality of the sentence will often suffer. - The phrase dictionary works well for compound nouns like product names ("Microsoft SQL Server"), proper names ("City of Hamburg"), or features of the product ("pivot table"). It doesn't work equally well for verbs or adjectives because those words are typically highly inflected in the source or in the target language. Best practice is to avoid phrase dictionary entries for anything but compound nouns. - When using a phrase dictionary, capitalization and punctuation are important. Dictionary entries will only match words and phrases in the input sentence that use exactly the same capitalization and punctuation as specified in the source dictionary file. Also, the translations will reflect the capitalization and punctuation provided in the target dictionary file. For example, suppose you trained an English to Spanish system that uses a phrase dictionary specifying "US" in the source file and "EE.UU." in the target file. When you request translation of a sentence that includes the word "us" (not capitalized), it will NOT return a match from the dictionary. However, if you request translation of a sentence that contains the word "US" (capitalized), it will match the dictionary and the translation will contain "EE.UU." The capitalization and punctuation in the translation may differ from what's specified in the dictionary target file, and may differ from the capitalization and punctuation in the source. It follows the rules of the target language. - When using a sentence dictionary, the end of sentence punctuation is ignored. For example, if your source dictionary contains "this sentence ends with punctuation!", then any translation requests containing "this sentence ends with punctuation" would match.-- If a word appears more than once in a dictionary file, the system will always use the last entry provided. Thus, your dictionary should not contain multiple translations of the same word.
+- If a word appears more than once in a dictionary file, the system will always use the last entry provided. Thus, your dictionary shouldn't contain multiple translations of the same word.
## Next steps
cognitive-services Workspace And Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/workspace-and-project.md
Title: What is a workspace and project? - Custom Translator
+ Title: "Legacy: What is a workspace and project? - Custom Translator"
description: This article explains the differences between a workspace and a project, as well as project categories and labels, for the Custom Translator service.
A workspace is a work area for composing and building your custom translation system. A workspace can contain multiple projects, models, and documents. All the work you do in Custom Translator is inside a specific workspace.
-Workspace is private to you and the people you invite into your workspace. Uninvited people do not have access to the content of your workspace. You can invite as many people as you like into your workspace and modify or remove their access anytime. You can also create a new workspace. By default a workspace will not contain any projects or documents that are within your other workspaces.
+A workspace is private to you and the people you invite into it. Uninvited people don't have access to the content of your workspace. You can invite as many people as you like into your workspace and modify or remove their access anytime. You can also create a new workspace. By default, a workspace won't contain any projects or documents that are within your other workspaces.
## What is a Custom Translator project?
that is used when querying the [V3 API](../reference/v3-0-translate.md?tabs=curl
The category identifies the domain – the area of terminology and style you want to use – for your project. Choose the category most relevant to your documents. In some cases, your choice of the category directly influences the behavior of the Custom Translator.
-We have two sets of baseline models. They are General and Technology. If the category **Technology** is selected, the Technology baseline models will be used. For any other category selection, the General baseline models are used. The Technology baseline model does well in technology domain, but it shows lower quality, if the sentences used for translation don't fall within the technology domain. We suggest customers to select category Technology only if sentences fall strictly within the technology domain.
+We have two sets of baseline models: General and Technology. If the category **Technology** is selected, the Technology baseline models will be used. For any other category selection, the General baseline models are used. The Technology baseline model does well in the technology domain, but it shows lower quality if the sentences used for translation don't fall within the technology domain. We suggest customers select the Technology category only if sentences fall strictly within the technology domain.
In the same workspace, you may create projects for the same language pair in different categories. Custom Translator prevents creation of a duplicate project
specify the same category (Technology) for both and leave the project label
blank. The CategoryID for both projects would match, so I could query the API for both English and French translations without having to modify my CategoryID.
-If you are a language service provider and want to serve
+If you're a language service provider and want to serve
multiple customers with different models that retain the same category and language pair, then using a project label to differentiate between customers would be a wise decision.
communication-services Teams Interoperability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/teams-interoperability.md
const locator = {
const call = callAgent.join(locator); ```
-Join by using meeting id (this is currently in limited preview):
-
-```js
-const locator = { meetingId: '<MEETING_ID>'}
-const call = callAgent.join(locator);
-```
- ## Next steps - [Learn how to manage calls](./manage-calls.md) - [Learn how to manage video](./manage-video.md)
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Previously updated : 03/28/2022 Last updated : 05/13/2022
The following example ARM template deploys a Container Apps environment.
}, "log_analytics_shared_key": { "type": "SecureString"
+ },
+ "storage_account_name": {
+ "type": "String"
+ },
+ "storage_account_key": {
+ "type": "SecureString"
+ },
+ "storage_share_name": {
+ "type": "String"
} }, "variables": {}, "resources": [ { "type": "Microsoft.App/managedEnvironments",
- "apiVersion": "2022-01-01-preview",
+ "apiVersion": "2022-03-01",
"name": "[parameters('environment_name')]", "location": "[parameters('location')]", "properties": {
The following example ARM template deploys a Container Apps environment.
"sharedKey": "[parameters('log_analytics_shared_key')]" } }
- }
+ },
+ "resources": [
+ {
+ "type": "storages",
+ "name": "myazurefiles",
+ "apiVersion": "2022-03-01",
+ "dependsOn": [
+ "[resourceId('Microsoft.App/managedEnvironments', parameters('environment_name'))]"
+ ],
+ "properties": {
+ "azureFile": {
+ "accountName": "[parameters('storage_account_name')]",
+ "accountKey": "[parameters('storage_account_key')]",
+ "shareName": "[parameters('storage_share_name')]",
+ "accessMode": "ReadWrite"
+ }
+ }
+ }
+ ]
} ] }
The following example ARM template deploys a container app.
}, "registry_password": { "type": "SecureString"
+ },
+ "storage_share_name": {
+ "type": "String"
} }, "variables": {}, "resources": [ {
- "apiVersion": "2022-01-01-preview",
+ "apiVersion": "2022-03-01",
"type": "Microsoft.App/containerApps", "name": "[parameters('containerappName')]", "location": "[parameters('location')]",
The following example ARM template deploys a container app.
"memory": "1Gi" }, "probes":[
- {
- "type":"liveness",
- "httpGet":{
- "path":"/health",
- "port":8080,
- "httpHeaders":[
- {
- "name":"Custom-Header",
- "value":"liveness probe"
- }]
- },
- "initialDelaySeconds":7,
- "periodSeconds":3
- },
- {
- "type":"readiness",
- "tcpSocket":
- {
- "port": 8081
- },
- "initialDelaySeconds": 10,
- "periodSeconds": 3
- },
- {
- "type": "startup",
- "httpGet": {
- "path": "/startup",
- "port": 8080,
- "httpHeaders": [
- {
- "name": "Custom-Header",
- "value": "startup probe"
- }]
- },
- "initialDelaySeconds": 3,
- "periodSeconds": 3
- }]
+ {
+ "type":"liveness",
+ "httpGet":{
+ "path":"/health",
+ "port":8080,
+ "httpHeaders":[
+ {
+ "name":"Custom-Header",
+ "value":"liveness probe"
+ }]
+ },
+ "initialDelaySeconds":7,
+ "periodSeconds":3
+ },
+ {
+ "type":"readiness",
+ "tcpSocket":
+ {
+ "port": 8081
+ },
+ "initialDelaySeconds": 10,
+ "periodSeconds": 3
+ },
+ {
+ "type": "startup",
+ "httpGet": {
+ "path": "/startup",
+ "port": 8080,
+ "httpHeaders": [
+ {
+ "name": "Custom-Header",
+ "value": "startup probe"
+ }]
+ },
+ "initialDelaySeconds": 3,
+ "periodSeconds": 3
+ }
+ ],
+ "volumeMounts": [
+ {
+ "mountPath": "/myempty",
+ "volumeName": "myempty"
+ },
+ {
+ "mountPath": "/myfiles",
+ "volumeName": "azure-files-volume"
+ }
+ ]
} ], "scale": { "minReplicas": 1, "maxReplicas": 3
- }
+ },
+ "volumes": [
+ {
+ "name": "myempty",
+ "storageType": "EmptyDir"
+ },
+ {
+ "name": "azure-files-volume",
+ "storageType": "AzureFile",
+ "storageName": "myazurefiles"
+ }
+ ]
} } }
container-apps Communicate Between Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/communicate-between-microservices.md
+
+ Title: 'Tutorial: Communication between microservices in Azure Container Apps'
+description: Learn how to communicate between microservices deployed in Azure Container Apps
++++ Last updated : 05/13/2022+
+zone_pivot_groups: container-apps-image-build-type
++
+# Tutorial: Communication between microservices in Azure Container Apps Preview
+
+Azure Container Apps exposes each container app through a domain name if [ingress](ingress.md) is enabled. Ingress endpoints for container apps within an external environment can be either publicly accessible or only available to other container apps in the same [environment](environment.md).
+
+Once you know the fully qualified domain name for a given container app, you can make direct calls to the service from other container apps within the shared environment.
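+For example, with external ingress enabled, any HTTP client can call the album API by its FQDN. The placeholder below stands in for the domain name returned when the API was deployed:
+
+```console
+# Request the album list from the API container app.
+# Replace <API_FQDN> with your app's fully qualified domain name.
+curl https://<API_FQDN>/albums
+```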
+
+In this tutorial, you deploy a second container app that makes a direct service call to the API deployed in the [Deploy your code to Azure Container Apps](./quickstart-code-to-cloud.md) quickstart.
+
+The following screenshot shows the UI microservice deployed to Container Apps at the end of this article.
++
+In this tutorial, you learn to:
+
+> [!div class="checklist"]
+> * Deploy a front end application to Azure Container Apps
+> * Link the front end app to the API endpoint deployed in the previous quickstart
+> * Verify the front end app can communicate with the back end API
+
+## Prerequisites
+
+In the [code to cloud quickstart](./quickstart-code-to-cloud.md), a back end web API is deployed to return a list of music albums. If you haven't deployed the album API microservice, return to [Quickstart: Deploy your code to Azure Container Apps](quickstart-code-to-cloud.md) to continue.
+
+## Setup
+
+If you're still authenticated to Azure and still have the environment variables defined from the quickstart, you can skip the following steps and go directly to the [Prepare the GitHub repository](#prepare-the-github-repository) section.
++
+Sign in to the Azure CLI.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az login
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az login
+```
++++
+# [Bash](#tab/bash)
+
+```azurecli
+az acr login --name $ACR_NAME
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az acr login --name $ACR_NAME
+```
++++
+## Prepare the GitHub repository
+
+1. In a new browser tab, navigate to the [repository for the UI application](https://github.com/azure-samples/containerapps-albumui) and select the **Fork** button at the top of the page to fork the repo to your account.
+
+ Follow the prompts from GitHub to fork the repository and return here once the operation is complete.
+
+1. Navigate to the parent of the *code-to-cloud* folder. If you're still in the *code-to-cloud/src* directory, you can use the following command to return to the parent folder.
+
+ ```console
+ cd ../..
+ ```
+
+1. Use the following git command to clone your forked repo into the *code-to-cloud-ui* folder:
+
+ ```git
+ git clone https://github.com/$GITHUB_USERNAME/containerapps-albumui.git code-to-cloud-ui
+ ```
+
+ > [!NOTE]
+ > If the `clone` command fails, check that you have successfully forked the repository.
+
+1. Next, change into the *src* folder of the cloned repo.
+
+ ```console
+ cd code-to-cloud-ui/src
+ ```
+
+## Build the front end application
++
+# [Bash](#tab/bash)
+
+```azurecli
+az acr build --registry $ACR_NAME --image albumapp-ui .
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az acr build --registry $ACR_NAME --image albumapp-ui .
+```
+++
+Output from the `az acr build` command shows the upload progress of the source code to Azure and the details of the `docker build` operation.
+++
+1. The following command builds a container image for the album UI and tags it with the fully qualified name of the ACR login server. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+
+ # [Bash](#tab/bash)
+
+ ```azurecli
+ docker build --tag $ACR_NAME.azurecr.io/albumapp-ui .
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ docker build --tag $ACR_NAME.azurecr.io/albumapp-ui .
+ ```
+
+
+
+## Push the image to your ACR registry
+
+1. First, sign in to your Azure Container Registry.
+
+ # [Bash](#tab/bash)
+
+ ```azurecli
+ az acr login --name $ACR_NAME
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ az acr login --name $ACR_NAME
+ ```
+
+
+
+1. Now, push the image to your registry.
+
+ # [Bash](#tab/bash)
+
+ ```azurecli
+ docker push $ACR_NAME.azurecr.io/albumapp-ui
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ docker push $ACR_NAME.azurecr.io/albumapp-ui
+ ```
+
+
++
+## Communicate between container apps
+
+In the previous quickstart, the album API was deployed by creating a container app and enabling external ingress. Setting the container app's ingress to *external* made its HTTP endpoint URL publicly available.
+
+Now you can configure the front end application to call the API endpoint by going through the following steps:
+
+* Query the API application for its fully qualified domain name (FQDN).
+* Pass the API FQDN to `az containerapp create` as an environment variable so the UI app can set the base URL for the album API call within the code.
+
+The [UI application](https://github.com/Azure-Samples/containerapps-albumui) uses the endpoint provided to invoke the album API. The following code is an excerpt from the *routes > index.js* file.
+
+```javascript
+const api = axios.create({
+ baseURL: process.env.API_BASE_URL,
+ params: {},
+ timeout: process.env.TIMEOUT || 5000,
+});
+```
+
+Notice how the `baseURL` property gets its value from the `API_BASE_URL` environment variable.
+
+Run the following command to query for the API endpoint address.
+
+# [Bash](#tab/bash)
+
+```azurecli
+API_BASE_URL=$(az containerapp show --resource-group $RESOURCE_GROUP --name $API_NAME --query properties.configuration.ingress.fqdn -o tsv)
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$API_BASE_URL=$(az containerapp show --resource-group $RESOURCE_GROUP --name $API_NAME --query properties.configuration.ingress.fqdn -o tsv)
+```
+++
+Now that you've set the `API_BASE_URL` variable with the FQDN of the album API, you can provide it as an environment variable to the front end container app.
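+Before deploying, you can optionally confirm the variable was captured; `echo` works in both Bash and PowerShell:
+
+```console
+# Should print a non-empty hostname (the FQDN of the album API).
+echo $API_BASE_URL
+```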
+
+## Deploy front end application
+
+Create and deploy your container app with the following command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp create \
+ --name $FRONTEND_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --environment $ENVIRONMENT \
+ --image $ACR_NAME.azurecr.io/albumapp-ui \
+ --target-port 3000 \
+ --env-vars API_BASE_URL=https://$API_BASE_URL \
+ --ingress 'external' \
+ --registry-server $ACR_NAME.azurecr.io \
+ --query configuration.ingress.fqdn
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp create `
+ --name $FRONTEND_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --environment $ENVIRONMENT `
+ --image $ACR_NAME.azurecr.io/albumapp-ui `
+ --env-vars API_BASE_URL=https://$API_BASE_URL `
+ --target-port 3000 `
+ --ingress 'external' `
+ --registry-server "$ACR_NAME.azurecr.io" `
+ --query configuration.ingress.fqdn
+```
+++
+By adding the argument `--env-vars API_BASE_URL=https://$API_BASE_URL` to `az containerapp create`, you define an environment variable for your front end application. With this syntax, the environment variable named `API_BASE_URL` is set to the API's FQDN.
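+The `--env-vars` parameter accepts one or more space-separated `key=value` pairs. As a sketch, the relevant fragment of the create command could also pass the `TIMEOUT` setting that the UI code reads (shown in the *index.js* excerpt above) alongside the API base URL:
+
+```azurecli
+--env-vars API_BASE_URL=https://$API_BASE_URL TIMEOUT=10000
+```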
+
+## View website
+
+The `az containerapp create` CLI command returns the fully qualified domain name (FQDN) of your album UI container app. Open this location in a browser to view the web application, which resembles the following screenshot.
++
+## Clean up resources
+
+If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az group delete --name $RESOURCE_GROUP
+```
+++
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Environments in Azure Container Apps](environment.md)
container-apps Custom Domains Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/custom-domains-certificates.md
+
+ Title: Custom domain names and certificates in Azure Container Apps
+description: Learn to manage custom domain names and certificates in Azure Container Apps
++++ Last updated : 05/15/2022+++
+# Custom domain names and certificates in Azure Container Apps
+
+Azure Container Apps allows you to bind one or more custom domains to a container app.
+
+- Every domain name must be associated with a domain certificate.
+- Certificates are applied to the container app environment and are bound to individual container apps. You must have role-based access to the environment to add certificates.
+- [SNI domain certificates](https://wikipedia.org/wiki/Server_Name_Indication) are required.
+
+## Add a custom domain and certificate
+
+> [!NOTE]
+> If you are using a new certificate, you must have an existing [SNI domain certificate](https://wikipedia.org/wiki/Server_Name_Indication) file available to upload to Azure.
+
+1. Navigate to your container app in the [Azure portal](https://portal.azure.com)
+
+1. Under the *Settings* section, select **Custom domains**.
+
+1. Select the **Add custom domain** button.
+
+1. In the *Add custom domain* window, enter the following values for the *Enter domain* tab:
+
+ | Setting | Value | Notes |
+ |--|--|--|
+ | Domain | Enter your domain name. | Make sure the value is just the domain without the protocol. For instance, `example.com` or `www.example.com`. |
+ | Hostname record type | Verify the default value. | The value selected automatically is Azure's best guess based on the form of the domain name you entered. For an apex domain, the value should be an `A` record; for a subdomain, it should be `CNAME`. |
+
+1. Next, you need to add the DNS records shown in this window to your domain via your domain provider's website. Open a new browser window to add the DNS records and return here once you're finished. (A command-line check like the one shown after these steps can help confirm the records have propagated.)
+
+1. Once the required DNS records are created on your domain provider's account, select the **Validate** button.
+
+1. Once validation succeeds, select the **Next** button.
+
+1. On the *Bind certificate + add* tab, enter the following values:
+
+ | Setting | Value | Notes |
+ |--|--|--|
+ | Certificate | Select an existing certificate from the list, or select the **Create new** link. | If you create a new certificate, a window appears that allows you to select a certificate file from your local machine. Once you select a certificate file, you're prompted to add the certificate password. |
+
+ Once you select a certificate, the binding operation may take up to a minute to complete.
+
+Once the add operation is complete, you see your domain name in the list of custom domains.
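+If the **Validate** step in the procedure above fails, the DNS records may not have propagated yet. A quick command-line check can confirm that the records are publicly visible; the domain names below are placeholders for your own:
+
+```console
+# Confirm the CNAME record created at your domain provider resolves.
+dig +short login.example.com CNAME
+
+# For an apex domain, check the A record instead.
+dig +short example.com A
+```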
+
+## Managing certificates
+
+You can manage certificates via the Container Apps environment or through an individual container app.
+
+### Environment
+
+The *Certificates* window of the Container Apps environment presents a table of all the certificates associated with the environment.
+
+You can manage your certificates through the following actions:
+
+| Action | Description |
+|--|--|
+| Add | Select the **Add certificate** link to add a new certificate. |
+| Delete | Select the trash can icon to remove a certificate. |
+| Renew | The *Health status* field of the table indicates when a certificate is expiring soon (within 60 days of the expiration date). To renew a certificate, select the **Renew certificate** link to upload a new certificate. |
+
+### Container app
+
+The *Custom domains* window of the container app presents a list of custom domains associated with the container app.
+
+You can manage the certificate for an individual domain name by selecting the ellipsis (**...**) button, which opens the certificate binding window. In this window, you can select a certificate to bind to the selected domain name.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Authentication in Azure Container Apps](authentication.md)
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
Now that you have a container app environment in Azure you can create a containe
9) Choose **External** to configure the HTTP traffic that the endpoint will accept.
-10) Enter a value of 3000 for the port, and then select **Enter** to complete the workflow. This value should be set to the port number that your container uses, which in the case of the sample app is 3000.
+10) Enter a value of 3500 for the port, and then select **Enter** to complete the workflow. This value should be set to the port number that your container uses, which in the case of the sample app is 3500.
During this process, Visual Studio Code and Azure create the container app for you. The published Docker image you created earlier is also deployed to the app. Once this process finishes, Visual Studio Code displays a notification with a link to browse to the site. Select this link to view your app in the browser.
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
Last updated 5/10/2022
# Disaster recovery guidance for Azure Container Apps
-Azure Container Apps uses [availability zones](../availability-zones/az-overview.md#availability-zones) to offer high-availability protection for your applications and data from data center failures.
+Azure Container Apps uses [availability zones](../availability-zones/az-overview.md#availability-zones) where offered to provide high-availability protection for your applications and data from data center failures.
Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones.
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
+
+ Title: "Quickstart: Deploy your code to Azure Container Apps"
+description: Code to cloud deploying your application to Azure Container Apps
++++ Last updated : 05/11/2022+
+zone_pivot_groups: container-apps-image-build-type
++
+# Quickstart: Deploy your code to Azure Container Apps
+
+This article demonstrates how to build and deploy a microservice to Azure Container Apps from a source repository using the programming language of your choice.
+
+This quickstart is the first in a series of articles that walk you through how to use core capabilities within Azure Container Apps. The first step is to create a back end web API service that returns a static collection of music albums.
+
+The following screenshot shows the output from the album API deployed in this quickstart.
++
+## Prerequisites
+
+To complete this project, you'll need the following items:
++
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal?tabs=current) for details. |
+| GitHub Account | Sign up for [free](https://github.com/join). |
+| git | [Install git](https://git-scm.com/downloads) |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+++
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal?tabs=current) for details. |
+| GitHub Account | Sign up for [free](https://github.com/join). |
+| git | [Install git](https://git-scm.com/downloads) |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+| Docker Desktop | Docker provides installers that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). <br><br>From your command prompt, type `docker` to ensure Docker is running. |
+++
+Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
+++
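+As a sketch, the shell variables used by the commands in this quickstart follow this pattern; the names match the ones used throughout the article, and the values are illustrative placeholders you can change:
+
+```console
+# Resource names used throughout this quickstart; adjust values as needed.
+RESOURCE_GROUP="album-containerapps"
+LOCATION="canadacentral"
+ENVIRONMENT="env-album-containerapps"
+API_NAME="album-api"
+ACR_NAME="<UNIQUE_REGISTRY_NAME>"
+GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
+```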
+## Prepare the GitHub repository
+
+Navigate to the repository for your preferred language and fork the repository.
+
+# [C#](#tab/csharp)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-csharp.git code-to-cloud
+```
+
+# [Go](#tab/go)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-go) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-go.git code-to-cloud
+```
+
+# [JavaScript](#tab/javascript)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-javascript.git code-to-cloud
+```
+
+# [Python](#tab/python)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-python.git code-to-cloud
+```
+++
+Next, change into the *src* folder of the cloned repo.
+
+```console
+cd code-to-cloud/src
+```
+
+## Create an Azure Resource Group
+
+Create a resource group to organize the services related to your container app deployment.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group create \
+ --name $RESOURCE_GROUP \
+ --location "$LOCATION"
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az group create `
+ --name $RESOURCE_GROUP `
+ --location "$LOCATION"
+```
+++
+## Create an Azure Container Registry
+
+Next, create an Azure Container Registry (ACR) instance in your resource group to store the album API container image once it's built.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr create \
+ --resource-group $RESOURCE_GROUP \
+ --name $ACR_NAME \
+ --sku Basic \
+ --admin-enabled true
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az acr create `
+ --resource-group $RESOURCE_GROUP `
+ --name $ACR_NAME `
+ --sku Basic `
+ --admin-enabled true
+```
++++
+## Build your application
+
+With [ACR tasks](/azure/container-registry/container-registry-tasks-overview), you can build and push the docker image for the album API without installing Docker locally.
+
+### Build the container with ACR
+
+Run the following command to initiate the image build and push process using ACR. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr build --registry $ACR_NAME --image $API_NAME .
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az acr build --registry $ACR_NAME --image $API_NAME .
+```
+++
+Output from the `az acr build` command shows the upload progress of the source code to Azure and the details of the `docker build` and `docker push` operations.
+++
+## Build your application
+
+The following steps demonstrate how to build your container image locally using Docker and push it to the new container registry.
+
+### Build the container with Docker
+
+The following command builds a container image for the album API and tags it with the fully qualified name of the ACR login server. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+
+# [Bash](#tab/bash)
+
+```azurecli
+docker build --tag $ACR_NAME.azurecr.io/$API_NAME .
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+docker build --tag "$ACR_NAME.azurecr.io/$API_NAME" .
+```
+++
+### Push the image to your container registry
+
+First, sign in to your Azure Container Registry.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr login --name $ACR_NAME
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az acr login --name $ACR_NAME
+```
+++
+Now, push the image to your registry.
+
+# [Bash](#tab/bash)
+
+```azurecli
+docker push $ACR_NAME.azurecr.io/$API_NAME
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+docker push "$ACR_NAME.azurecr.io/$API_NAME"
+```
++++
+## Create a Container Apps environment
+
+The Azure Container Apps environment acts as a secure boundary around a group of container apps.
+
+Create the Container Apps environment using the following command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp env create \
+ --name $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --location "$LOCATION"
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp env create `
+ --name $ENVIRONMENT `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION
+```
+++
+## Deploy your image to a container app
+
+Now that you have an environment created, you can create and deploy your container app with the `az containerapp create` command.
+
+Create and deploy your container app with the following command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp create \
+ --name $API_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --environment $ENVIRONMENT \
+ --image $ACR_NAME.azurecr.io/$API_NAME \
+ --target-port 3500 \
+ --ingress 'external' \
+ --registry-server $ACR_NAME.azurecr.io \
+ --query configuration.ingress.fqdn
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp create `
+ --name $API_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --environment $ENVIRONMENT `
+ --image "$ACR_NAME.azurecr.io/$API_NAME" `
+ --target-port 3500 `
+ --ingress 'external' `
+ --registry-server "$ACR_NAME.azurecr.io" `
+ --query configuration.ingress.fqdn
+```
+++
+* By setting `--ingress` to `external`, your container app will be accessible from the public internet.
+
+* The `target-port` is set to `3500` to match the port that the container is listening on for requests.
+
+* Without the `--query` argument, the call to `az containerapp create` returns a JSON response that includes a rich set of details about the application. Adding the query filters the response down to just the FQDN.
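+If you need the FQDN again later, the same value can be queried at any time. The following sketch uses the standard Azure CLI `--query` and `--output` conventions:
+
+```azurecli
+# Retrieve the FQDN of the deployed container app as plain text.
+az containerapp show \
+  --name $API_NAME \
+  --resource-group $RESOURCE_GROUP \
+  --query properties.configuration.ingress.fqdn \
+  --output tsv
+```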
+
+## Verify deployment
+
+The `az containerapp create` command returns the fully qualified domain name (FQDN) for the container app. Copy the FQDN to a web browser.
+
+From your web browser, navigate to the `/albums` endpoint of the FQDN.
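+Alternatively, you can check the endpoint from the command line; replace `<FQDN>` with the value returned by the create command:
+
+```console
+# Expect a JSON array of albums in the response.
+curl https://<FQDN>/albums
+```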
++
+## Clean up resources
+
+If you're not going to continue on to the [Communication between microservices](communicate-between-microservices.md) tutorial, you can remove the Azure resources created during this quickstart. Run the following command to delete the resource group along with all the resources created in this quickstart.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az group delete --name $RESOURCE_GROUP
+```
+++
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+This quickstart is the entry point for a set of progressive tutorials that showcase the various features within Azure Container Apps. Continue on to learn how to enable communication from a web front end that calls the API you deployed in this article.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Communication between microservices](communicate-between-microservices.md)
container-apps Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/samples.md
Refer to the following samples to learn how to use Azure Container Apps in diffe
| Name | Description | |--|--|
-| [Deploy an Orleans Cluster to Container Apps](https://github.com/Azure-Samples/Orleans-Cluster-on-Azure-Container-Apps) | An end-to-end sample and tutorial for getting a Microsoft Orleans cluster running on Azure Container Apps. Worker microservices rapidly transmit data to a back-end Orleans cluster for monitoring and storage, emulating thousands of physical devices in the field. |
-| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-on-Azure-Container-Apps ) | This sample demonstrates ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps. |
-| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps (with Dapr)](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-with-DAPR-on-Azure-Container-Apps ) | Demonstrates how ASP.NET Core 6.0 is used to build a cloud-native application hosted in Azure Container Apps using Dapr. |
+| [A/B Testing your ASP.NET Core apps using Azure Container Apps](https://github.com/Azure-Samples/dotNET-Frontend-AB-Testing-on-Azure-Container-Apps) | Shows how to use Azure App Configuration, ASP.NET Core Feature Flags, and Azure Container Apps revisions together to gradually release features or perform A/B tests. |
+| [gRPC with ASP.NET Core on Azure Container Apps](https://github.com/Azure-Samples/dotNET-Workers-with-gRPC-messaging-on-Azure-Container-Apps) | This repository contains a simple scenario built to demonstrate how ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps that uses gRPC request/response transmission from worker microservices. The gRPC service simultaneously streams sensor data to a Blazor server front end, so you can watch the data be charted in real time. |
+| [Deploy an Orleans Cluster to Container Apps](https://github.com/Azure-Samples/Orleans-Cluster-on-Azure-Container-Apps) | An end-to-end sample and tutorial for getting a Microsoft Orleans cluster running on Azure Container Apps. Worker microservices rapidly transmit data to a back-end Orleans cluster for monitoring and storage, emulating thousands of physical devices in the field. |
+| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-on-Azure-Container-Apps) | This sample demonstrates how ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps. |
+| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps (with Dapr)](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-with-DAPR-on-Azure-Container-Apps) | Demonstrates how ASP.NET Core 6.0 is used to build a cloud-native application hosted in Azure Container Apps using Dapr. |
container-apps Storage Mounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md
+
+ Title: Use storage mounts in Azure Container Apps
+description: Learn to use temporary and permanent storage mounts in Azure Container Apps
++++ Last updated : 05/13/2022+
+zone_pivot_groups: container-apps-config-types
++
+# Use storage mounts in Azure Container Apps
+
+A container app has access to different types of storage. A single app can take advantage of more than one type of storage if necessary.
+
+| Storage type | Description | Usage examples |
+|--|--|--|
+| [Container file system](#container-file-system) | Temporary storage scoped to the local container | Writing a local app cache. |
+| [Temporary storage](#temporary-storage) | Temporary storage scoped to an individual replica | Sharing files between containers in a replica. For instance, the main app container can write log files that are processed by a sidecar container. |
+| [Azure Files](#azure-files) | Permanent storage | Writing files to a file share to make data accessible by other systems. |
+
+## Container file system
+
+A container can write to its own file system.
+
+Container file system storage has the following characteristics:
+
+* The storage is temporary and disappears when the container is shut down or restarted.
+* Files written to this storage are only visible to processes running in the current container.
+* There are no capacity guarantees. The available storage depends on the amount of disk space available in the container.
+
+## Temporary storage
+
+You can mount an ephemeral volume that is equivalent to [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) in Kubernetes. Temporary storage is scoped to a single replica.
+
+Temporary storage has the following characteristics:
+
+* Files are persisted for the lifetime of the replica.
+ * If a container in a replica restarts, the files in the volume remain.
+* Any containers in the replica can mount the same volume.
+* A container can mount multiple temporary volumes.
+* There are no capacity guarantees. The available storage depends on the amount of disk space available in the replica.
+
+To configure temporary storage, first define an `EmptyDir` volume in the revision. Then define a volume mount in one or more containers in the revision.
+
+### Prerequisites
+
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). |
+| Azure Container Apps environment | [Create a container apps environment](environment.md). |
+
+### Configuration
++
+When using temporary storage, you must use the Azure CLI with a YAML definition to create or update your container app.
+
+1. To update an existing container app to use temporary storage, export your app's specification to a YAML file named *app.yaml*.
+
+ ```azurecli
+ az containerapp show -n <APP_NAME> -g <RESOURCE_GROUP_NAME> -o yaml > app.yaml
+ ```
+
+1. Make the following changes to your container app specification.
+
+ - Add a `volumes` array to the `template` section of your container app definition and define a volume.
+ - The `name` is an identifier for the volume.
+ - Use `EmptyDir` as the `storageType`.
+ - For each container in the template that you want to mount temporary storage, add a `volumeMounts` array to the container definition and define a volume mount.
+ - The `volumeName` is the name defined in the `volumes` array.
+ - The `mountPath` is the path in the container to mount the volume.
+
+ ```yaml
+ properties:
+ managedEnvironmentId: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.App/managedEnvironments/<ENVIRONMENT_NAME>
+ configuration:
+ activeRevisionsMode: Single
+ template:
+ containers:
+ - image: <IMAGE_NAME>
+ name: my-container
+ volumeMounts:
+ - mountPath: /myempty
+ volumeName: myempty
+ volumes:
+ - name: myempty
+ storageType: EmptyDir
+ ```
+
+1. Update your container app using the YAML file.
+
+ ```azurecli
+ az containerapp update --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> \
+ --yaml app.yaml
+ ```
+++
+To create a temporary volume and mount it in a container, make the following changes to the container apps resource in an ARM template:
+
+- Add a `volumes` array to the `template` section of your container app definition and define a volume.
+ - The `name` is an identifier for the volume.
+ - Use `EmptyDir` as the `storageType`.
+- For each container in the template that you want to mount temporary storage, add a `volumeMounts` array to the container definition and define a volume mount.
+ - The `volumeName` is the name defined in the `volumes` array.
+ - The `mountPath` is the path in the container to mount the volume.
+
+Example ARM template snippet:
+
+```json
+{
+ "apiVersion": "2022-03-01",
+ "type": "Microsoft.App/containerApps",
+ "name": "[parameters('containerappName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+
+ ...
+
+ "template": {
+ "revisionSuffix": "myrevision",
+ "containers": [
+ {
+ "name": "main",
+ "image": "[parameters('container_image')]",
+ "resources": {
+ "cpu": 0.5,
+ "memory": "1Gi"
+ },
+ "volumeMounts": [
+ {
+ "mountPath": "/myempty",
+ "volumeName": "myempty"
+ }
+ ]
+ }
+ ],
+ "scale": {
+ "minReplicas": 1,
+ "maxReplicas": 3
+ },
+ "volumes": [
+ {
+ "name": "myempty",
+ "storageType": "EmptyDir"
+ }
+ ]
+ }
+ }
+}
+```
+
+See the [ARM template API specification](azure-resource-manager-api-spec.md) for a full example.
++
+## Azure Files
+
+You can mount a file share from [Azure Files](/azure/storage/files/) as a volume inside a container.
+
+Azure Files storage has the following characteristics:
+
+* Files written under the mount location are persisted to the file share.
+* Files in the share are available via the mount location.
+* Multiple containers can mount the same file share, including ones that are in another replica, revision, or container app.
+* All containers that mount the share can access files written by any other container or method.
+* More than one Azure Files volume can be mounted in a single container.
+
+To enable Azure Files storage in your container, you need to set up your container in the following ways:
+
+* Create a storage definition of type `AzureFile` in the Container Apps environment.
+* Define a storage volume in a revision.
+* Define a volume mount in one or more containers in the revision.
+
+#### Prerequisites
+
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). |
+| Azure Storage account | [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-cli#create-a-storage-account-1). |
+| Azure Container Apps environment | [Create a container apps environment](environment.md). |
+
+### Configuration
++
+When using Azure Files, you must use the Azure CLI with a YAML definition to create or update your container app.
+
+1. Add a storage definition of type `AzureFile` to your Container Apps environment.
+
+ ```azurecli
+ az containerapp env storage set --name my-env --resource-group my-group \
+ --storage-name mystorage \
+ --azure-file-account-name <STORAGE_ACCOUNT_NAME> \
+ --azure-file-account-key <STORAGE_ACCOUNT_KEY> \
+ --azure-file-share-name <STORAGE_SHARE_NAME> \
+ --access-mode ReadWrite
+ ```
+
+ Replace `<STORAGE_ACCOUNT_NAME>` and `<STORAGE_ACCOUNT_KEY>` with the name and key of your storage account. Replace `<STORAGE_SHARE_NAME>` with the name of the file share in the storage account.
+
+ Valid values for `--access-mode` are `ReadWrite` and `ReadOnly`.
+
+1. To update an existing container app to mount a file share, export your app's specification to a YAML file named *app.yaml*.
+
+ ```azurecli
+ az containerapp show -n <APP_NAME> -g <RESOURCE_GROUP_NAME> -o yaml > app.yaml
+ ```
+
+1. Make the following changes to your container app specification.
+
+ - Add a `volumes` array to the `template` section of your container app definition and define a volume.
+ - The `name` is an identifier for the volume.
+ - For `storageType`, use `AzureFile`.
+ - For `storageName`, use the name of the storage you defined in the environment.
+ - For each container in the template that you want to mount Azure Files storage, add a `volumeMounts` array to the container definition and define a volume mount.
+ - The `volumeName` is the name defined in the `volumes` array.
+ - The `mountPath` is the path in the container to mount the volume.
+
+ ```yaml
+ properties:
+ managedEnvironmentId: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.App/managedEnvironments/<ENVIRONMENT_NAME>
+ configuration:
+ template:
+ containers:
+ - image: <IMAGE_NAME>
+ name: my-container
+ volumeMounts:
+ - volumeName: azure-files-volume
+ mountPath: /my-files
+ volumes:
+ - name: azure-files-volume
+ storageType: AzureFile
+ storageName: mystorage
+ ```
+
+1. Update your container app using the YAML file.
+
+ ```azurecli
+ az containerapp update --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> \
+ --yaml app.yaml
+ ```
+++
+The following ARM template snippets demonstrate how to add an Azure Files share to a Container Apps environment and use it in a container app.
+
+1. Add a `storages` child resource to the Container Apps environment.
+
+ ```json
+ {
+ "type": "Microsoft.App/managedEnvironments",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('environment_name')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "daprAIInstrumentationKey": "[parameters('dapr_ai_instrumentation_key')]",
+ "appLogsConfiguration": {
+ "destination": "log-analytics",
+ "logAnalyticsConfiguration": {
+ "customerId": "[parameters('log_analytics_customer_id')]",
+ "sharedKey": "[parameters('log_analytics_shared_key')]"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "storages",
+ "name": "myazurefiles",
+ "apiVersion": "2022-03-01",
+ "dependsOn": [
+ "[resourceId('Microsoft.App/managedEnvironments', parameters('environment_name'))]"
+ ],
+ "properties": {
+ "azureFile": {
+ "accountName": "[parameters('storage_account_name')]",
+ "accountKey": "[parameters('storage_account_key')]",
+ "shareName": "[parameters('storage_share_name')]",
+ "accessMode": "ReadWrite"
+ }
+ }
+ }
+ ]
+ }
+ ```
+
+1. Update the container app resource to add a volume and volume mount.
+
+ ```json
+ {
+ "apiVersion": "2022-03-01",
+ "type": "Microsoft.App/containerApps",
+ "name": "[parameters('containerappName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+
+ ...
+
+ "template": {
+ "revisionSuffix": "myrevision",
+ "containers": [
+ {
+ "name": "main",
+ "image": "[parameters('container_image')]",
+ "resources": {
+ "cpu": 0.5,
+ "memory": "1Gi"
+ },
+ "volumeMounts": [
+ {
+ "mountPath": "/myfiles",
+ "volumeName": "azure-files-volume"
+ }
+ ]
+ }
+ ],
+ "scale": {
+ "minReplicas": 1,
+ "maxReplicas": 3
+ },
+ "volumes": [
+ {
+ "name": "azure-files-volume",
+ "storageType": "AzureFile",
+ "storageName": "myazurefiles"
+ }
+ ]
+ }
+ }
+ }
+ ```
+
+ - Add a `volumes` array to the `template` section of your container app definition and define a volume.
+ - The `name` is an identifier for the volume.
+ - For `storageType`, use `AzureFile`.
+ - For `storageName`, use the name of the storage you defined in the environment.
+ - For each container in the template that you want to mount Azure Files storage, add a `volumeMounts` array to the container definition and define a volume mount.
+ - The `volumeName` is the name defined in the `volumes` array.
+ - The `mountPath` is the path in the container to mount the volume.
+
+See the [ARM template API specification](azure-resource-manager-api-spec.md) for a full example.
+
container-instances Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/availability-zones.md
# Deploy an Azure Container Instances (ACI) container group in an availability zone (preview)
-An [availability zone][availability-zone-overview] is a physically separate zone in an Azure region. You can use availability zones to protect your containerized applications from an unlikely failure or loss of an entire data center. Azure Container Instances (ACI) supports zonal container group deployments, meaning the instance is pinned to a specific, self-selected availability zone. The availability zone is specified at the container group level. Containers within a container group cannot have unique availability zones. To change your container group's availability zone, you must delete the container group and create another container group with the new availability zone.
+An [availability zone][availability-zone-overview] is a physically separate zone in an Azure region. You can use availability zones to protect your containerized applications from an unlikely failure or loss of an entire data center. Three types of Azure services support availability zones: *zonal*, *zone-redundant*, and *always-available* services. You can learn more about these types of services and how they promote resiliency in the [Highly available services section of Azure services that support availability zones](/azure/availability-zones/az-region#highly-available-services).
+
+Azure Container Instances (ACI) supports *zonal* container group deployments, meaning the instance is pinned to a specific, self-selected availability zone. The availability zone is specified at the container group level. Containers within a container group can't have unique availability zones. To change your container group's availability zone, you must delete the container group and create another container group with the new availability zone.
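+As a minimal sketch, a zonal deployment with the Azure CLI pins the container group to a zone at creation time. The resource names below are placeholders, and the `--zone` parameter assumes a recent Azure CLI version:
+
+```azurecli
+# Create a container group pinned to availability zone 1.
+az container create \
+  --resource-group myResourceGroup \
+  --name acilinuxcontainergroup \
+  --image mcr.microsoft.com/azuredocs/aci-helloworld \
+  --zone 1
+```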
> [!IMPORTANT] > This feature is currently in preview. Previews are made available to you on the condition that you agree to the supplemental terms of use. > [!IMPORTANT]
-> Zonal container group deployments are supported in most regions where ACI is available for Linux and Windows Sever 2019 container groups. For details, see [Regions and resource availability][container-regions].
+> Zonal container group deployments are supported in most regions where ACI is available for Linux and Windows Server 2019 container groups. For details, see [Regions and resource availability][container-regions].
> [!NOTE] > Examples in this article are formatted for the Bash shell. If you prefer another shell, adjust the line continuation characters accordingly.
An [availability zone][availability-zone-overview] is a physically separate zone
> [!IMPORTANT] > This feature is currently not available for Azure portal.
-* Container groups with GPU resources do not support availability zones at this time.
-* Virtual Network injected container groups do not support availability zones at this time.
-* Windows Sever 2016 container groups do not support availability zones at this time.
+* Container groups with GPU resources don't support availability zones at this time.
+* Virtual Network injected container groups don't support availability zones at this time.
+* Windows Server 2016 container groups don't support availability zones at this time.
### Version requirements
To verify the container group deployed successfully into an availability zone, v
az container show --name acilinuxcontainergroup --resource-group myResourceGroup ```
+## Next steps
+
+Learn about building fault-tolerant applications using zonal container groups from the [Azure Architecture Center's guide on availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
+ <!-- LINKS - Internal --> [az-container-create]: /cli/azure/container#az_container_create [container-regions]: container-instances-region-availability.md
cosmos-db Find Request Unit Charge Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/find-request-unit-charge-mongodb.md
Previously updated : 08/26/2021 Last updated : 05/12/2022 ms.devlang: csharp, java, javascript
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources, such as CPU, IOPS, and memory, that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, and whether the operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and their considerations](../request-units.md) article.
-This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB API for MongoDB. If you are using a different API, see [SQL API](../find-request-unit-charge.md), [Cassandra API](../cassandr) articles to find the RU/s charge.
+This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB API for MongoDB. If you're using a different API, see the [SQL API](../find-request-unit-charge.md) and [Cassandra API](../cassandr) articles to find the RU/s charge.
The RU charge is exposed by a custom [database command](https://docs.mongodb.com/manual/reference/command/) named `getLastRequestStatistics`. The command returns a document that contains the name of the last operation executed, its request charge, and its duration. If you use the Azure Cosmos DB API for MongoDB, you have multiple options for retrieving the RU charge.
The RU charge is exposed by a custom [database command](https://docs.mongodb.com
`db.runCommand({getLastRequestStatistics: 1})`
-## Use the MongoDB .NET driver
+## Use a MongoDB driver
+
+### [.NET driver](#tab/dotnet-driver)
When you use the [official MongoDB .NET driver](https://docs.mongodb.com/ecosystem/drivers/csharp/), you can execute commands by calling the `RunCommand` method on a `IMongoDatabase` object. This method requires an implementation of the `Command<>` abstract class:
double requestCharge = (double)stats["RequestCharge"];
For more information, see [Quickstart: Build a .NET web app by using an Azure Cosmos DB API for MongoDB](create-mongodb-dotnet.md).
-## Use the MongoDB Java driver
-
+### [Java driver](#tab/java-driver)
When you use the [official MongoDB Java driver](https://mongodb.github.io/mongo-java-driver/), you can execute commands by calling the `runCommand` method on a `MongoDatabase` object:
Double requestCharge = stats.getDouble("RequestCharge");
For more information, see [Quickstart: Build a web app by using the Azure Cosmos DB API for MongoDB and the Java SDK](create-mongodb-java.md).
-## Use the MongoDB Node.js driver
+### [Node.js driver](#tab/node-driver)
When you use the [official MongoDB Node.js driver](https://mongodb.github.io/node-mongodb-native/), you can execute commands by calling the `command` method on a `db` object:
db.command({ getLastRequestStatistics: 1 }, function(err, result) {
For more information, see [Quickstart: Migrate an existing MongoDB Node.js web app to Azure Cosmos DB](create-mongodb-nodejs.md).
+### [Python driver](#tab/python-driver)
+
+```python
+response = db.command('getLastRequestStatistics')
+requestCharge = response['RequestCharge']
+```
+++ ## Next steps To learn about optimizing your RU consumption, see these articles:
To learn about optimizing your RU consumption, see these articles:
* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md) * [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db How To Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-time-to-live.md
Title: Configure and manage Time to Live in Azure Cosmos DB description: Learn how to configure and manage time to live on a container and an item in Azure Cosmos DB-+ Previously updated : 12/09/2021- Last updated : 05/12/2022++
In Azure Cosmos DB, you can choose to configure Time to Live (TTL) at the container level, or you can override it at an item level after setting for the container. You can configure TTL for a container by using Azure portal or the language-specific SDKs. Item level TTL overrides can be configured by using the SDKs.
-> This content is related to Azure Cosmos DB transactional store TTL. If you are looking for analitycal store TTL, that enables NoETL HTAP scenarios through [Azure Synapse Link](../synapse-link.md), please click [here](../analytical-store-introduction.md#analytical-ttl).
+> This article's content is related to Azure Cosmos DB transactional store TTL. If you're looking for analytical store TTL, which enables no-ETL HTAP scenarios through [Azure Synapse Link](../synapse-link.md), see [analytical TTL](../analytical-store-introduction.md#analytical-ttl).
-## Enable time to live on a container using Azure portal
+## Enable time to live on a container using the Azure portal
-Use the following steps to enable time to live on a container with no expiration. Enable this to allow TTL to be overridden at the item level. You can also set the TTL by entering a non-zero value for seconds.
+Use the following steps to enable time to live on a container with no expiration. Enabling TTL at the container level allows the same setting to be overridden at the level of an individual item. You can also set the TTL by entering a non-zero value for seconds.
1. Sign in to the [Azure portal](https://portal.azure.com/).
Use the following steps to enable time to live on a container with no expiration
* Set it to **On (no default)** or * Turn **On** with a TTL value specified in seconds.
- * Click **Save** to save the changes.
+ * Select **Save** to save the changes.
:::image type="content" source="./media/how-to-time-to-live/how-to-time-to-live-portal.png" alt-text="Configure Time to live in Azure portal":::
-* When DefaultTimeToLive is null then your Time to Live is Off
-* When DefaultTimeToLive is -1 then your Time to Live setting is On (No default)
-* When DefaultTimeToLive has any other Int value (except 0) your Time to Live setting is On. The server will automatically delete items based on the configured value.
+* When DefaultTimeToLive is null, your Time to Live setting is Off
+* When DefaultTimeToLive is -1, your Time to Live setting is On (No default)
+* When DefaultTimeToLive has any other Int value (except 0), your Time to Live setting is On. The server will automatically delete items based on the configured value.
-## Enable time to live on a container using Azure CLI or PowerShell
+## Enable time to live on a container using Azure CLI or Azure PowerShell
To create or enable TTL on a container see, * [Create a container with TTL using Azure CLI](manage-with-cli.md#create-a-container-with-ttl) * [Create a container with TTL using PowerShell](manage-with-powershell.md#create-container-unique-key-ttl)
-## Enable time to live on a container using SDK
+## Enable time to live on a container using an SDK
-### <a id="dotnet-enable-noexpiry"></a> .NET SDK
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
-# [.NET SDK V2](#tab/dotnetv2)
+```csharp
+Database database = client.GetDatabase("database");
-.NET SDK V2 (Microsoft.Azure.DocumentDB)
+ContainerProperties properties = new ()
+{
+ Id = "container",
+ PartitionKeyPath = "/customerId",
+ // Never expire by default
+ DefaultTimeToLive = -1
+};
-```csharp
// Create a new container with TTL enabled and without any expiration value
-DocumentCollection collectionDefinition = new DocumentCollection();
-collectionDefinition.Id = "myContainer";
-collectionDefinition.PartitionKey.Paths.Add("/myPartitionKey");
-collectionDefinition.DefaultTimeToLive = -1; //(never expire by default)
-
-DocumentCollection ttlEnabledCollection = await client.CreateDocumentCollectionAsync(
- UriFactory.CreateDatabaseUri("myDatabaseName"),
- collectionDefinition);
+Container container = await database
+ .CreateContainerAsync(properties);
```
-# [.NET SDK V3](#tab/dotnetv3)
+### [Java SDK v4](#tab/javav4)
-.NET SDK V3 (Microsoft.Azure.Cosmos)
+```java
+CosmosDatabase database = client.getDatabase("database");
+
+CosmosContainerProperties properties = new CosmosContainerProperties(
+ "container",
+ "/customerId"
+);
+// Never expire by default
+properties.setDefaultTimeToLiveInSeconds(-1);
-```csharp
// Create a new container with TTL enabled and without any expiration value
-await client.GetDatabase("database").CreateContainerAsync(new ContainerProperties
-{
- Id = "container",
- PartitionKeyPath = "/myPartitionKey",
- DefaultTimeToLive = -1 //(never expire by default)
-});
+CosmosContainerResponse response = database
+ .createContainerIfNotExists(properties);
```-
-### <a id="java-enable-noexpiry"></a> Java SDK
+### [Node SDK](#tab/node-sdk)
-# [Java SDK V4](#tab/javav4)
+```javascript
+const database = await client.database("database");
-Java SDK V4 (Maven com.azure::azure-cosmos)
+const properties = {
+ id: "container",
+ partitionKey: "/customerId",
+ // Never expire by default
+ defaultTtl: -1
+};
-```java
-CosmosAsyncContainer container;
+const { container } = await database.containers
+ .createIfNotExists(properties);
-// Create a new container with TTL enabled and without any expiration value
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey");
-containerProperties.setDefaultTimeToLiveInSeconds(-1);
-container = database.createContainerIfNotExists(containerProperties, 400).block().getContainer();
```
-# [Java SDK V3](#tab/javav3)
-
-Java SDK V3 (Maven com.microsoft.azure::azure-cosmos)
+### [Python SDK](#tab/python-sdk)
-```java
-CosmosContainer container;
+```python
+database = client.get_database_client('database')
-// Create a new container with TTL enabled and without any expiration value
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey");
-containerProperties.defaultTimeToLive(-1);
-container = database.createContainerIfNotExists(containerProperties, 400).block().container();
+database.create_container(
+ id='container',
+ partition_key=PartitionKey(path='/customerId'),
+ # Never expire by default
+ default_ttl=-1
+)
```+
-## Set time to live on a container using SDK
+## Set time to live on a container using an SDK
To set the time to live on a container, you need to provide a non-zero positive number that indicates the time period in seconds. Based on the configured TTL value, items are deleted that many seconds after each item's last modified timestamp, `_ts`. For example, a value of `90 * 60 * 60 * 24` (7,776,000 seconds) expires items 90 days after they were last modified.
-### <a id="dotnet-enable-withexpiry"></a> .NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
-
-.NET SDK V2 (Microsoft.Azure.DocumentDB)
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
```csharp
-// Create a new container with TTL enabled and a 90 day expiration
-DocumentCollection collectionDefinition = new DocumentCollection();
-collectionDefinition.Id = "myContainer";
-collectionDefinition.PartitionKey.Paths.Add("/myPartitionKey");
-collectionDefinition.DefaultTimeToLive = 90 * 60 * 60 * 24 // expire all documents after 90 days
-
-DocumentCollection ttlEnabledCollection = await client.CreateDocumentCollectionAsync(
- UriFactory.CreateDatabaseUri("myDatabaseName"),
- collectionDefinition;
-```
+Database database = client.GetDatabase("database");
-# [.NET SDK V3](#tab/dotnetv3)
-
-.NET SDK V3 (Microsoft.Azure.Cosmos)
-
-```csharp
-// Create a new container with TTL enabled and a 90 day expiration
-await client.GetDatabase("database").CreateContainerAsync(new ContainerProperties
+ContainerProperties properties = new ()
{ Id = "container",
- PartitionKeyPath = "/myPartitionKey",
- DefaultTimeToLive = 90 * 60 * 60 * 24 // expire all documents after 90 days
-});
-```
--
-### <a id="java-enable-defaultexpiry"></a> Java SDK
+ PartitionKeyPath = "/customerId",
+ // Expire all documents after 90 days
+ DefaultTimeToLive = 90 * 60 * 60 * 24
+};
-# [Java SDK V4](#tab/javav4)
+// Create a new container with TTL enabled and a default expiration of 90 days
+Container container = await database
+ .CreateContainerAsync(properties);
+```
-Java SDK V4 (Maven com.azure::azure-cosmos)
+### [Java SDK v4](#tab/javav4)
```java
-CosmosAsyncContainer container;
+CosmosDatabase database = client.getDatabase("database");
+
+CosmosContainerProperties properties = new CosmosContainerProperties(
+ "container",
+ "/customerId"
+);
+// Expire all documents after 90 days
+properties.setDefaultTimeToLiveInSeconds(90 * 60 * 60 * 24);
-// Create a new container with TTL enabled with default expiration value
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey");
-containerProperties.setDefaultTimeToLiveInSeconds(90 * 60 * 60 * 24);
-container = database.createContainerIfNotExists(containerProperties, 400).block().getContainer();
+CosmosContainerResponse response = database
+ .createContainerIfNotExists(properties);
```
-# [Java SDK V3](#tab/javav3)
+### [Node SDK](#tab/node-sdk)
-Java SDK V3 (Maven com.microsoft.azure::azure-cosmos)
+```javascript
+const database = await client.database("database");
-```java
-CosmosContainer container;
+const properties = {
+ id: "container",
+ partitionKey: "/customerId",
+ // Expire all documents after 90 days
+ defaultTtl: 90 * 60 * 60 * 24
+};
-// Create a new container with TTL enabled with default expiration value
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey");
-containerProperties.defaultTimeToLive(90 * 60 * 60 * 24);
-container = database.createContainerIfNotExists(containerProperties, 400).block().container();
+const { container } = await database.containers
+ .createIfNotExists(properties);
```-
-### <a id="nodejs-enable-withexpiry"></a>NodeJS SDK
+### [Python SDK](#tab/python-sdk)
-```javascript
-const containerDefinition = {
- id: "sample container1",
- };
-
-async function createcontainerWithTTL(db: Database, containerDefinition: ContainerDefinition, collId: any, defaultTtl: number) {
- containerDefinition.id = collId;
- containerDefinition.defaultTtl = defaultTtl;
- await db.containers.create(containerDefinition);
-}
+```python
+database = client.get_database_client('database')
+
+database.create_container(
+ id='container',
+ partition_key=PartitionKey(path='/customerId'),
+ # Expire all documents after 90 days
+ default_ttl=90 * 60 * 60 * 24
+)
```
-## Set time to live on an item
++
+## Set time to live on an item using the Azure portal
In addition to setting a default time to live on a container, you can set a time to live for an item. Setting a time to live at the item level overrides the container's default TTL for that item.
-* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, to expire the item after the last modified timestamp of the item `_ts`. You can provide a `-1` as well when the item should not expire.
+* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, to expire the item after the last modified timestamp of the item `_ts`. You can provide a `-1` as well when the item shouldn't expire.
* If the item doesn't have a TTL field, then by default the TTL set on the container applies to the item. * If TTL is disabled at the container level, the TTL field on the item is ignored until TTL is re-enabled on the container. The sketch below illustrates these rules.
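The following C# sketch illustrates these rules; the 90-day container default and the `SalesOrder` record are assumptions borrowed from the SDK examples later in this article:

```csharp
// Assumes a container created with DefaultTimeToLive = 90 * 60 * 60 * 24 (90 days)
// and the SalesOrder record from the SDK section below. A null ttl is assumed
// to be omitted from the stored JSON so that the container default applies.
SalesOrder inheritsDefault = new ("SO01", "CO1", null); // expires 90 days after _ts
SalesOrder neverExpires = new ("SO02", "CO1", -1);      // never expires
SalesOrder oneHourOverride = new ("SO03", "CO1", 3600); // expires 1 hour after _ts
```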
-### <a id="portal-set-ttl-item"></a>Azure portal
- Use the following steps to enable time to live on an item: 1. Sign in to the [Azure portal](https://portal.azure.com/).
Use the following steps to enable time to live on an item:
4. Select an existing container, expand it and modify the following values:
- * Open the **Scale & Settings** window.
- * Under **Setting** find, **Time to Live**.
- * Select **On (no default)** or select **On** and set a TTL value.
- * Click **Save** to save the changes.
+ * Open the **Scale & Settings** window.
+ * Under **Setting** find, **Time to Live**.
+ * Select **On (no default)** or select **On** and set a TTL value.
+ * Select **Save** to save the changes.
5. Next, navigate to the item for which you want to set time to live, add the `ttl` property, and select **Update**.
- ```json
- {
- "id": "1",
- "_rid": "Jic9ANWdO-EFAAAAAAAAAA==",
- "_self": "dbs/Jic9AA==/colls/Jic9ANWdO-E=/docs/Jic9ANWdO-EFAAAAAAAAAA==/",
- "_etag": "\"0d00b23f-0000-0000-0000-5c7712e80000\"",
- "_attachments": "attachments/",
- "ttl": 10,
- "_ts": 1551307496
- }
- ```
-
-### <a id="dotnet-set-ttl-item"></a>.NET SDK (any)
+ ```json
+ {
+ "id": "1",
+ "_rid": "Jic9ANWdO-EFAAAAAAAAAA==",
+ "_self": "dbs/Jic9AA==/colls/Jic9ANWdO-E=/docs/Jic9ANWdO-EFAAAAAAAAAA==/",
+ "_etag": "\"0d00b23f-0000-0000-0000-5c7712e80000\"",
+ "_attachments": "attachments/",
+ "ttl": 10,
+ "_ts": 1551307496
+ }
+ ```
-```csharp
-// Include a property that serializes to "ttl" in JSON
-public class SalesOrder
-{
- [JsonProperty(PropertyName = "id")]
- public string Id { get; set; }
- [JsonProperty(PropertyName="cid")]
- public string CustomerId { get; set; }
- // used to set expiration policy
- [JsonProperty(PropertyName = "ttl", NullValueHandling = NullValueHandling.Ignore)]
- public int? ttl { get; set; }
-
- //...
-}
-// Set the value to the expiration in seconds
-SalesOrder salesOrder = new SalesOrder
-{
- Id = "SO05",
- CustomerId = "CO18009186470",
- ttl = 60 * 60 * 24 * 30; // Expire sales orders in 30 days
-};
-```
+## Set time to live on an item using an SDK
-### <a id="nodejs-set-ttl-item"></a>NodeJS SDK
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
-```javascript
-const itemDefinition = {
- id: "doc",
- name: "sample Item",
- key: "value",
- ttl: 2
- };
+```csharp
+public record SalesOrder(string id, string customerId, int? ttl);
```
-### <a id="java-set-ttl-item"></a> Java SDK
-
-# [Java SDK V4](#tab/javav4)
-
-Java SDK V4 (Maven com.azure::azure-cosmos)
-
-```java
-// Include a property that serializes to "ttl" in JSON
-public class SalesOrder
-{
- private String id;
- private String customerId;
- private Integer ttl;
-
- public SalesOrder(String id, String customerId, Integer ttl) {
- this.id = id;
- this.customerId = customerId;
- this.ttl = ttl;
- }
-
- public String getId() {return this.id;}
- public void setId(String new_id) {this.id = new_id;}
- public String getCustomerId() {return this.customerId;}
- public void setCustomerId(String new_cid) {this.customerId = new_cid;}
- public Integer getTtl() {return this.ttl;}
- public void setTtl(Integer new_ttl) {this.ttl = new_ttl;}
-
- //...
-}
+```csharp
+Container container = database.GetContainer("container");
-// Set the value to the expiration in seconds
-SalesOrder salesOrder = new SalesOrder(
- "SO05",
- "CO18009186470",
- 60 * 60 * 24 * 30 // Expire sales orders in 30 days
+SalesOrder item = new (
+    "SO05",
+    "CO18009186470",
+    // Expire sales order in 30 days using "ttl" property
+    ttl: 60 * 60 * 24 * 30
);
+await container.CreateItemAsync<SalesOrder>(item);
```
-# [Java SDK V3](#tab/javav3)
-
-Java SDK V3 (Maven com.microsoft.azure::azure-cosmos)
+### [Java SDK v4](#tab/javav4)
```java
-// Include a property that serializes to "ttl" in JSON
-public class SalesOrder
-{
- private String id;
- private String customerId;
- private Integer ttl;
-
- public SalesOrder(String id, String customerId, Integer ttl) {
- this.id = id;
- this.customerId = customerId;
- this.ttl = ttl;
- }
+public class SalesOrder {
- public String id() {return this.id;}
- public void id(String new_id) {this.id = new_id;}
- public String customerId() {return this.customerId;}
- public void customerId(String new_cid) {this.customerId = new_cid;}
- public Integer ttl() {return this.ttl;}
- public void ttl(Integer new_ttl) {this.ttl = new_ttl;}
+ public String id;
- //...
-}
+ public String customerId;
-// Set the value to the expiration in seconds
-SalesOrder salesOrder = new SalesOrder(
- "SO05",
- "CO18009186470",
- 60 * 60 * 24 * 30 // Expire sales orders in 30 days
-);
+ // Include a property that serializes to "ttl" in JSON
+ public Integer ttl;
+}
```--
-## Reset time to live
-
-You can reset the time to live on an item by performing a write or update operation on the item. The write or update operation will set the `_ts` to the current time, and the TTL for the item to expire will begin again. If you wish to change the TTL of an item, you can update the field just as you update any other field.
-### <a id="dotnet-extend-ttl-item"></a> .NET SDK
-
-# [.NET SDK V2](#tab/dotnetv2)
+```java
+CosmosContainer container = database.getContainer("container");
-.NET SDK V2 (Microsoft.Azure.DocumentDB)
+SalesOrder item = new SalesOrder();
+item.id = "SO05";
+item.customerId = "CO18009186470";
+// Expire sales order in 30 days using "ttl" property
+item.ttl = 60 * 60 * 24 * 30;
-```csharp
-// This examples leverages the Sales Order class above.
-// Read a document, update its TTL, save it.
-response = await client.ReadDocumentAsync(
- "/dbs/salesdb/colls/orders/docs/SO05"),
- new RequestOptions { PartitionKey = new PartitionKey("CO18009186470") });
-
-Document readDocument = response.Resource;
-readDocument.ttl = 60 * 30 * 30; // update time to live
-response = await client.ReplaceDocumentAsync(readDocument);
+container.createItem(item);
```
-# [.NET SDK V3](#tab/dotnetv3)
+### [Node SDK](#tab/node-sdk)
-.NET SDK V3 (Microsoft.Azure.Cosmos)
+```javascript
+const container = await database.container("container");
-```csharp
-// This examples leverages the Sales Order class above.
-// Read a document, update its TTL, save it.
-ItemResponse<SalesOrder> itemResponse = await client.GetContainer("database", "container").ReadItemAsync<SalesOrder>("SO05", new PartitionKey("CO18009186470"));
+const item = {
+ id: 'SO05',
+ customerId: 'CO18009186470',
+ // Expire sales order in 30 days using "ttl" property
+ ttl: 60 * 60 * 24 * 30
+};
-itemResponse.Resource.ttl = 60 * 30 * 30; // update time to live
-await client.GetContainer("database", "container").ReplaceItemAsync(itemResponse.Resource, "SO05");
+await container.items.create(item);
```-
-### <a id="java-enable-modifyitemexpiry"></a> Java SDK
+### [Python SDK](#tab/python-sdk)
-# [Java SDK V4](#tab/javav4)
+```python
+container = database.get_container_client('container')
-Java SDK V4 (Maven com.azure::azure-cosmos)
+item = {
+ 'id': 'SO05',
+ 'customerId': 'CO18009186470',
+ # Expire sales order in 30 days using "ttl" property
+ 'ttl': 60 * 60 * 24 * 30
+}
-```java
-// This examples leverages the Sales Order class above.
-// Read a document, update its TTL, save it.
-CosmosAsyncItemResponse<SalesOrder> itemResponse = container.readItem("SO05", new PartitionKey("CO18009186470"), SalesOrder.class)
- .flatMap(readResponse -> {
- SalesOrder salesOrder = readResponse.getItem();
- salesOrder.setTtl(60 * 30 * 30);
- return container.createItem(salesOrder);
-}).block();
+container.create_item(body=item)
```
-# [Java SDK V3](#tab/javav3)
-
-SDK V3 (Maven com.microsoft.azure::azure-cosmos)
-
-```java
-// This examples leverages the Sales Order class above.
-// Read a document, update its TTL, save it.
-container.getItem("SO05", new PartitionKey("CO18009186470")).read()
- .flatMap(readResponse -> {
- SalesOrder salesOrder = null;
- try {
- salesOrder = readResponse.properties().getObject(SalesOrder.class);
- } catch (Exception err) {
-
- }
- salesOrder.ttl(60 * 30 * 30);
- return container.createItem(salesOrder);
-}).block();
-```
-## Turn off time to live
-
-If time to live has been set on an item and you no longer want that item to expire, then you can get the item, remove the TTL field, and replace the item on the server. When the TTL field is removed from the item, the default TTL value assigned to the container is applied to the item. Set the TTL value to -1 to prevent an item from expiring and to not inherit the TTL value from the container.
-
-### <a id="dotnet-turn-off-ttl-item"></a> .NET SDK
+## Reset time to live using an SDK
-# [.NET SDK V2](#tab/dotnetv2)
+You can reset the time to live on an item by performing a write or update operation on the item. The write or update operation will set the `_ts` to the current time, and the TTL for the item to expire will begin again. If you wish to change the TTL of an item, you can update the field just as you update any other field.
-.NET SDK V2 (Microsoft.Azure.DocumentDB)
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
```csharp
-// This examples leverages the Sales Order class above.
-// Read a document, turn off its override TTL, save it.
-response = await client.ReadDocumentAsync(
- "/dbs/salesdb/colls/orders/docs/SO05"),
- new RequestOptions { PartitionKey = new PartitionKey("CO18009186470") });
+SalesOrder item = await container.ReadItemAsync<SalesOrder>(
+ "SO05",
+ new PartitionKey("CO18009186470")
+);
-Document readDocument = response.Resource;
-readDocument.ttl = null; // inherit the default TTL of the container
+// Update ttl to 2 hours
+SalesOrder modifiedItem = item with {
+ ttl = 60 * 60 * 2
+};
-response = await client.ReplaceDocumentAsync(readDocument);
+await container.ReplaceItemAsync<SalesOrder>(
+ modifiedItem,
+ "SO05",
+ new PartitionKey("CO18009186470")
+);
```
-# [.NET SDK V3](#tab/dotnetv3)
+### [Java SDK v4](#tab/javav4)
-.NET SDK V3 (Microsoft.Azure.Cosmos)
+```java
+CosmosItemResponse<SalesOrder> response = container.readItem(
+ "SO05",
+ new PartitionKey("CO18009186470"),
+ SalesOrder.class
+);
-```csharp
-// This examples leverages the Sales Order class above.
-// Read a document, turn off its override TTL, save it.
-ItemResponse<SalesOrder> itemResponse = await client.GetContainer("database", "container").ReadItemAsync<SalesOrder>("SO05", new PartitionKey("CO18009186470"));
+SalesOrder item = response.getItem();
-itemResponse.Resource.ttl = null; // inherit the default TTL of the container
-await client.GetContainer("database", "container").ReplaceItemAsync(itemResponse.Resource, "SO05");
-```
--
-### <a id="java-enable-itemdefaultexpiry"></a> Java SDK
+// Update ttl to 2 hours
+item.ttl = 60 * 60 * 2;
-# [Java SDK V4](#tab/javav4)
+CosmosItemRequestOptions options = new CosmosItemRequestOptions();
+container.replaceItem(
+ item,
+ "SO05",
+ new PartitionKey("CO18009186470"),
+ options
+);
+```
-Java SDK V4 (Maven com.azure::azure-cosmos)
+### [Node SDK](#tab/node-sdk)
-```java
-// This examples leverages the Sales Order class above.
-// Read a document, update its TTL, save it.
-CosmosAsyncItemResponse<SalesOrder> itemResponse = container.readItem("SO05", new PartitionKey("CO18009186470"), SalesOrder.class)
- .flatMap(readResponse -> {
- SalesOrder salesOrder = readResponse.getItem();
- salesOrder.setTtl(null);
- return container.createItem(salesOrder);
-}).block();
+```javascript
+const { resource: item } = await container.item(
+ 'SO05',
+ 'CO18009186470'
+).read();
+
+// Update ttl to 2 hours
+item.ttl = 60 * 60 * 2;
+
+await container.item(
+ 'SO05',
+ 'CO18009186470'
+).replace(item);
```
-# [Java SDK V3](#tab/javav3)
+### [Python SDK](#tab/python-sdk)
-Java SDK V3 (Maven com.microsoft.azure::azure-cosmos)
+```python
+item = container.read_item(
+ item='SO05',
+ partition_key='CO18009186470'
+)
-```java
-// This examples leverages the Sales Order class above.
-// Read a document, update its TTL, save it.
-container.getItem("SO05", new PartitionKey("CO18009186470")).read()
- .flatMap(readResponse -> {
- SalesOrder salesOrder = null;
- try {
- salesOrder = readResponse.properties().getObject(SalesOrder.class);
- } catch (Exception err) {
-
- }
- salesOrder.ttl(null);
- return container.createItem(salesOrder);
-}).block();
+# Update ttl to 2 hours
+item['ttl'] = 60 * 60 * 2
+
+container.replace_item(
+ item='SO05',
+ body=item
+)
```+
-## Disable time to live
+## Disable time to live using an SDK
To disable time to live on a container and stop the background process from checking for expired items, delete the `DefaultTimeToLive` property on the container. Deleting this property is different from setting it to -1. When you set it to -1, new items added to the container will live forever; however, you can override this value on specific items in the container. When you remove the TTL property from the container, the items will never expire, even if they have explicitly overridden the previous default TTL value.
-### <a id="dotnet-disable-ttl"></a> .NET SDK
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
-# [.NET SDK V2](#tab/dotnetv2)
+```csharp
+ContainerProperties properties = await container.ReadContainerAsync();
-.NET SDK V2 (Microsoft.Azure.DocumentDB)
+// Disable ttl at container-level
+properties.DefaultTimeToLive = null;
-```csharp
-// Get the container, update DefaultTimeToLive to null
-DocumentCollection collection = await client.ReadDocumentCollectionAsync("/dbs/salesdb/colls/orders");
-// Disable TTL
-collection.DefaultTimeToLive = null;
-await client.ReplaceDocumentCollectionAsync(collection);
+await container.ReplaceContainerAsync(properties);
```
-# [.NET SDK V3](#tab/dotnetv3)
+### [Java SDK v4](#tab/javav4)
-.NET SDK V3 (Microsoft.Azure.Cosmos)
+```java
+CosmosContainerResponse response = container.read();
+CosmosContainerProperties properties = response.getProperties();
-```csharp
-// Get the container, update DefaultTimeToLive to null
-ContainerResponse containerResponse = await client.GetContainer("database", "container").ReadContainerAsync();
-// Disable TTL
-containerResponse.Resource.DefaultTimeToLive = null;
-await client.GetContainer("database", "container").ReplaceContainerAsync(containerResponse.Resource);
+// Disable ttl at container-level
+properties.setDefaultTimeToLiveInSeconds(null);
+
+container.replace(properties);
```-
-### <a id="java-enable-disableexpiry"></a> Java SDK
+### [Node SDK](#tab/node-sdk)
-# [Java SDK V4](#tab/javav4)
+```javascript
+const { resource: definition } = await container.read();
-Java SDK V4 (Maven com.azure::azure-cosmos)
+// Disable ttl at container-level
+definition.defaultTtl = null;
-```java
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey");
-// Disable TTL
-containerProperties.setDefaultTimeToLiveInSeconds(null);
-// Update container settings
-container.replace(containerProperties).block();
+await container.replace(definition);
```
-# [Java SDK V3](#tab/javav3)
+### [Python SDK](#tab/python-sdk)
-Java SDK V3 (Maven com.microsoft.azure::azure-cosmos)
-
-```java
-CosmosContainer container;
-
-// Create a new container with TTL enabled and without any expiration value
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey");
-// Disable TTL
-containerProperties.defaultTimeToLive(null);
-// Update container settings
-container = database.createContainerIfNotExists(containerProperties, 400).block().container();
+```python
+database.replace_container(
+    container,
+    partition_key=PartitionKey(path='/customerId'),
+    # Disable ttl at container-level
+    default_ttl=None
+)
```+ ## Next steps
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-sink.md
For more information on using this SMT, see the [InsertUUID repository](https://
### Using SMTs to configure Time to live (TTL)
-Using both the `InsertField` and `Cast` SMTs, you can configure TTL on each item created in Azure Cosmos DB. Enable TTL on the container before enabling TTL at an item level. For more information, see the [time-to-live](how-to-time-to-live.md#enable-time-to-live-on-a-container-using-azure-portal) doc.
+Using both the `InsertField` and `Cast` SMTs, you can configure TTL on each item created in Azure Cosmos DB. Enable TTL on the container before enabling TTL at an item level. For more information, see the [time-to-live](how-to-time-to-live.md#enable-time-to-live-on-a-container-using-the-azure-portal) doc.
Inside your Sink connector config, add the following properties to set the TTL in seconds. In the following example, the TTL is set to 100 seconds. If the message already contains the `TTL` field, the `TTL` value will be overwritten by these SMTs.
cost-management-billing Cost Management Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-management-error-codes.md
+
+ Title: Troubleshoot common Cost Management errors
+
+description: This article describes common Cost Management errors and provides information about solutions.
+++++ Last updated : 05/13/2022+++
+# Troubleshoot common Cost Management errors
+
+This article describes common Cost Management errors and provides information about solutions. When you use Cost Management in the Azure portal and encounter an error that you don't understand or can't resolve, find the error code below. Then try to use the mitigation information or the more information link to resolve the problem.
+
+Here's a list of common error codes with mitigation information.
+
+If the information provided doesn't help you, [create a support request](#create-a-support-request).
+
+## 400
+
+Error message `400`.
+<a name="400"></a>
+
+### Mitigation
+
+If you're using the [BillingPeriods](/rest/api/consumption/#getting-list-of-billing-periods) API, confirm that you're using a classic pay-as-you-go or EA subscription. The BillingPeriods API doesn't support Microsoft Customer Agreement subscriptions.
+
+Confirm that you're using a supported scope for the specific feature or subscription offer type.
+
+There are many feature-specific errors that use the `400` error code. Refer to the error message and API documentation for specific details. For general information, see [Cost Management APIs](/rest/api/cost-management).
+
+### More information
+
+For more information about billing periods when transitioning to a Microsoft Customer Agreement, see [Billing period](../understand/mca-understand-your-invoice.md#billing-period).
+
+## 401
+
+Error message `401`.
+
+<a name="401"></a>
+
+### Mitigation
+
+For an Enterprise Agreement, confirm that the view charges options (Account Owner or Department Administrator) have been enabled.
+
+For a Microsoft Customer Agreement, confirm that the billing account owner has assigned you to a role that can view charges.
+
+See [AuthorizationFailed](#AuthorizationFailed).
+
+### More information
+
+For more information about enterprise agreements, see [Troubleshoot enterprise cost views](../manage/enterprise-mgmt-grp-troubleshoot-cost-view.md).
+
+For more information about Microsoft Customer Agreements, see [Understand Microsoft Customer Agreement administrative roles in Azure](../manage/understand-mca-roles.md).
+
+## 404
+
+Error message `404`.
+
+<a name="404"></a>
+
+### Mitigation
+
+Confirm that you're using a supported scope for the specific feature or supported subscription offer type.
+
+Also, see [NotFound](#NotFound).
+
+## 500
+
+Error message `500`.
+
+<a name="500"></a>
+
+### Mitigation
+
+This message indicates an internal error. Wait an hour and try again.
+
+Also, see [GatewayTimeout](#GatewayTimeout).
+
+## 503
+
+Error message `503`.
+
+<a name="503"></a>
+
+### Mitigation
+
+This message indicates an internal error. Wait an hour and try again.
+
+When creating or updating exports, you might see the error while the Microsoft.CostManagementExports resource provider is being registered for your subscription. Resource provider registration is quick, but you may need to wait up to five minutes. If you still see the error after more than 10 minutes, [create a support request](#create-a-support-request).
+
+Also, see [GatewayTimeout](#GatewayTimeout).
+
+## AccountCostDisabled
+
+Error message `AccountCostDisabled`.
+
+<a name="AccountCostDisabled"></a>
+
+### Mitigation
+
+The message indicates that the Enterprise Agreement administrator hasn't enabled Cost Management (view charges) for account owners and subscription users. Contact your administrator.
+
+### More information
+
+For more information, see [Troubleshoot Azure enterprise cost views](../manage/enterprise-mgmt-grp-troubleshoot-cost-view.md).
+
+## AuthorizationFailed
+
+Error message `AuthorizationFailed`.
+
+<a name="AuthorizationFailed"></a>
+
+### Mitigation
+
+Confirm that you have access to the specified scope or object, such as a budget or export.
+
+### More information
+
+For more information, see [Assign access to Cost Management data](assign-access-acm-data.md).
+
+## BadRequest
+
+Error message `BadRequest`.
+
+<a name="BadRequest"></a>
+
+### Mitigation
+
+When using the Query or Forecast APIs to retrieve cost data, validate the query body.
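+As a sanity check, here's a minimal sketch of a Query API call in C#; the body shape follows the Query - Usage reference linked below, while the scope, token value, and api-version are placeholder assumptions to adapt for your environment:
+
+```csharp
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Net.Http.Json;
+
+// Placeholder values for illustration only.
+string scope = "subscriptions/00000000-0000-0000-0000-000000000000";
+string accessToken = "<bearer-token>";
+
+// A minimal, valid query body: actual cost, month to date, summed daily.
+var body = new
+{
+    type = "ActualCost",
+    timeframe = "MonthToDate",
+    dataset = new
+    {
+        granularity = "Daily",
+        aggregation = new { totalCost = new { name = "Cost", function = "Sum" } }
+    }
+};
+
+using HttpClient client = new();
+client.DefaultRequestHeaders.Authorization =
+    new AuthenticationHeaderValue("Bearer", accessToken);
+
+// A 400 response here usually means the body is malformed for the scope.
+HttpResponseMessage response = await client.PostAsJsonAsync(
+    $"https://management.azure.com/{scope}/providers/Microsoft.CostManagement/query?api-version=2021-10-01",
+    body);
+```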
+
+If you see the `object ID cannot be null` error in the portal, try refreshing your view.
+
+Also, see [SubscriptionTypeNotSupported](#SubscriptionTypeNotSupported).
+
+### More information
+
+For more information about the Query - Usage API body examples, see [Query - Usage](/rest/api/cost-management/query/usage).
+
+For more information about the Forecast - Usage API body examples, see [Forecast - Usage](/rest/api/cost-management/forecast/usage).
+
+## BillingAccessDenied
+
+Error message `BillingAccessDenied`.
+
+<a name="BillingAccessDenied"></a>
+
+### Mitigation
+
+See [AuthorizationFailed](#AuthorizationFailed).
+
+## DepartmentCostDisabled
+
+Error message `DepartmentCostDisabled`.
+
+<a name="DepartmentCostDisabled"></a>
+
+### Mitigation
+
+The message indicates that the Enterprise Agreement administrator hasn't enabled Cost Management (DA view charges) for department admins. Contact your EA administrator.
+
+### More information
+
+For more information about troubleshooting disabled costs, see [Troubleshoot Azure enterprise cost views](../manage/enterprise-mgmt-grp-troubleshoot-cost-view.md).
+
+## DisallowedOperation
+
+Error message `DisallowedOperation`.
+
+<a name="DisallowedOperation"></a>
+
+### Mitigation
+
+The message indicates that the subscription doesn't have any charges. The type of subscription that you're using isn't allowed to incur charges. Because the subscription can't have any billed charges, it isn't supported by Cost Management.
+
+## FailedDependency
+
+Error message `FailedDependency`.
+
+<a name="FailedDependency"></a>
+
+### Mitigation
+
+When you're using the Forecast API, the error indicates that there's either not enough data to generate an accurate forecast or that there are multiple currencies that can't be merged.
+
+If you have multiple currencies, filter down to charges in a single currency, or request an aggregation of **CostUSD** instead of **Cost** to get a forecast normalized to USD.
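+As a small sketch (using the same anonymous-object style as the BadRequest example above), the USD-normalized aggregation swaps the measure name:
+
+```csharp
+// Aggregate CostUSD instead of Cost so charges billed in different
+// currencies can be combined into a single USD-normalized forecast.
+var aggregation = new
+{
+    totalCostUSD = new { name = "CostUSD", function = "Sum" }
+};
+```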
+
+If there's not enough historical data, wait one week since you first had charges on the scope to see a forecast.
+
+### More information
+
+For more information about the API, see [Forecast - Usage](/rest/api/cost-management/forecast/usage).
+
+## GatewayTimeout
+
+Error message `GatewayTimeout`.
+
+<a name="GatewayTimeout"></a>
+
+### Mitigation
+
+This message indicates an internal error. Wait an hour and try again.
+
+When querying for cost data using the Query, Forecast, or Publish APIs, consider simplifying your query with fewer group-by columns or using a lower-level scope. Avoid using large management groups with more than 50 subscriptions.
+
+## IndirectCostDisabled
+
+Error message `IndirectCostDisabled`.
+
+<a name="IndirectCostDisabled"></a>
+
+### Mitigation
+
+The message indicates that your partner hasn't published pricing for the Enterprise Agreement enrollment, which is required to use Cost Management. Contact your partner.
+
+### More information
+
+For more information, see [Troubleshoot Azure enterprise cost views](../manage/enterprise-mgmt-grp-troubleshoot-cost-view.md).
+
+## InvalidAuthenticationTokenTenant
+
+Error message `InvalidAuthenticationTokenTenant`.
+
+<a name="InvalidAuthenticationTokenTenant"></a>
+
+### Mitigation
+
+The subscription you're accessing might have been moved to a different directory.
+
+When using the Azure portal, you might have used a link or saved reference, like a dashboard tile, before the subscription was moved.
+
+Switch to the correct directory that was mentioned in the error message and try again. Don't forget to remove any old references and update any links.
+
+## InvalidGatewayHost
+
+Error message `InvalidGatewayHost`.
+
+<a name="InvalidGatewayHost"></a>
+
+### Mitigation
+
+This message indicates an internal error. Try again in five minutes. If the error continues, [create a support request](#create-a-support-request).
+
+## InvalidScheduledActionEmailRecipients
+
+Error message `InvalidScheduledActionEmailRecipients`.
+
+<a name="InvalidScheduledActionEmailRecipients"></a>
+
+### Mitigation
+
+The message indicates that the scheduled action or alert email that you're creating or updating doesn't have any email recipients. When using the Azure portal, press ENTER after specifying an email address to ensure it's saved in the form.
+
+## InvalidView
+
+Error message `InvalidView`.
+
+<a name="InvalidView"></a>
+
+### Mitigation
+
+The message indicates that the view specified when creating or updating an alert with the ScheduledActions API isn't valid.
+
+When configuring anomaly alerts, make sure you use a kind value of **InsightAlert**.
+
+## MissingSubscription
+
+Error message `MissingSubscription`.
+
+<a name="MissingSubscription"></a>
+
+### Mitigation
+
+The message indicates that the HTTP request didn't include a valid scope.
+
+If using the Azure portal, [create a support request](#create-a-support-request). The error is likely caused by an internal problem.
+
+## NotFound
+
+Error message `NotFound`.
+
+<a name="NotFound"></a>
+
+### Mitigation
+
+If using a subscription or resource group, see [SubscriptionNotFound](#SubscriptionNotFound).
+
+If using a management group, see [SubscriptionTypeNotSupported](#SubscriptionTypeNotSupported).
+
+If using Cost Management in the Azure portal, try refreshing the page. The error may be caused by an old reference to a deleted object within the system, like a budget or connector.
+
+For any other cases, validate the scope or resource ID.
+
+### More information
+
+For more information, see [Assign access to Cost Management data](assign-access-acm-data.md).
+
+## RBACAccessDenied
+
+Error message `RBACAccessDenied`.
+
+<a name="RBACAccessDenied"></a>
+
+### Mitigation
+
+For mitigation information, see [AuthorizationFailed](#AuthorizationFailed).
+
+## ReadOnlyDisabledSubscription
+
+Error message `ReadOnlyDisabledSubscription`.
+
+<a name="ReadOnlyDisabledSubscription"></a>
+
+### Mitigation
+
+The subscription is disabled. You can't create or update Cost Management objects, like budgets and views, for a disabled subscription.
+
+### More information
+
+For more information, see [Reactivate a disabled Azure subscription](../manage/subscription-disabled.md).
+
+## ResourceGroupNotFound
+
+Error message `ResourceGroupNotFound`.
+
+<a name="ResourceGroupNotFound"></a>
+
+### Mitigation
+
+The error indicates that a resource group doesn't exist. The resource group might have been moved or deleted.
+
+If using the Azure portal, you might see the error when creating budgets or exports. The error is expected and you can ignore it.
+
+## ResourceRequestsThrottled
+
+Error message `ResourceRequestsThrottled`.
+
+<a name="ResourceRequestsThrottled"></a>
+
+### Mitigation
+
+The error is caused by excessive use within a short timeframe. Wait five minutes and try again.
+
+### More information
+
+For more information, see [Error code 429 - Call count has exceeded rate limits](manage-automation.md#error-code-429call-count-has-exceeded-rate-limits).
+
+## ServerTimeout
+
+Error message `ServerTimeout`.
+
+<a name="ServerTimeout"></a>
+
+### Mitigation
+
+For mitigation information, see [GatewayTimeout](#GatewayTimeout).
+
+## SubscriptionNotFound
+
+Error message `SubscriptionNotFound`.
+
+<a name="SubscriptionNotFound"></a>
+
+### Mitigation
+
+- Validate that the subscription ID is correct.
+- Confirm that you have a supported subscription type.
+
+If using Cost Management for a newly created subscription, wait 48 hours and try again.
+
+### More information
+
+Supported subscription types are shown at [Understand Cost Management data](understand-cost-mgt-data.md).
+
+## SubscriptionTypeNotSupported
+
+Error message `SubscriptionTypeNotSupported`.
+
+<a name="SubscriptionTypeNotSupported"></a>
+
+### Mitigation
+
+If using a management group, verify that all subscriptions have a supported offer type. Cost Management doesn't support management groups with Microsoft Customer Agreement subscriptions.
+
+### More information
+
+Supported subscription types are shown at [Understand Cost Management data](understand-cost-mgt-data.md).
+
+## Unauthorized
+
+Error message `Unauthorized`.
+
+<a name="Unauthorized"></a>
+
+### Mitigation
+
+If using the ExternalBillingAccounts or ExternalSubscriptions APIs, verify that the Microsoft.CostManagement resource provider (RP) was [registered](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) for your Azure Active Directory instance. Resource provider registration is required to use Cost Management for AWS.
+
+If you get an `Empty GUID user id` error, update the bearer token associated with the request. You might temporarily see the error in the Azure portal, but it should resolve itself. If you continue to see the error in the Azure portal, refresh your browser.
+
+Also, see [AuthorizationFailed](#AuthorizationFailed).
+
+### More information
+
+For more information, see [Set up AWS integration with Cost Management](aws-integration-set-up-configure.md).
+
+## Create a support request
+
+If you're facing an error not listed above or need more help, file a [support request](../../azure-portal/supportability/how-to-create-azure-support-request.md) and specify the issue type as **Billing**.
+
+## Next steps
+
+- Read the [Cost Management + Billing frequently asked questions (FAQ)](../cost-management-billing-faq.yml).
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Title: Tutorial - Create and manage Azure budgets
description: This tutorial helps you plan and account for the costs of Azure services that you consume. Previously updated : 03/22/2022 Last updated : 05/13/2022
To create or update action groups, select **Manage action group** while you're c
Next, select **Add action group** and create the action group.
-Budget integration with action groups only works for action groups that have the common alert schema disabled. For more information about disabling the schema, see [How do I enable the common alert schema?](../../azure-monitor/alerts/alerts-common-schema.md#how-do-i-enable-the-common-alert-schema)
+Budget integration with action groups works whether the action group has the common alert schema enabled or disabled. For more information on how to enable the common alert schema, see [How do I enable the common alert schema?](../../azure-monitor/alerts/alerts-common-schema.md#how-do-i-enable-the-common-alert-schema)
## Create and edit budgets with PowerShell
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mysql.md
Previously updated : 12/20/2021 Last updated : 05/12/2022 # Copy and transform data in Azure Database for MySQL using Azure Data Factory or Synapse Analytics
The below table lists the properties supported by Azure Database for MySQL sink.
> 1. It's recommended to break single batch scripts with multiple commands into multiple batches. > 2. Only Data Definition Language (DDL) and Data Manipulation Language (DML) statements that return a simple update count can be run as part of a batch. Learn more from [Performing batch operations](/sql/connect/jdbc/performing-batch-operations)
+* Enable incremental extract: Use this option to tell ADF to process only rows that have changed since the last time the pipeline executed.
+
+* Incremental date column: When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table.
+
+* Start reading from beginning: Setting this option with incremental extract instructs ADF to read all rows on the first execution of a pipeline with incremental extract turned on.
+ #### Azure Database for MySQL sink script example When you use Azure Database for MySQL as sink type, the associated data flow script is:
data-factory Connector Azure Database For Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-postgresql.md
The below table lists the properties supported by Azure Database for PostgreSQL
> 1. It's recommended to break single batch scripts with multiple commands into multiple batches. > 2. Only Data Definition Language (DDL) and Data Manipulation Language (DML) statements that return a simple update count can be run as part of a batch. Learn more from [Performing batch operations](/sql/connect/jdbc/performing-batch-operations) +
+* Enable incremental extract: Use this option to tell ADF to process only rows that have changed since the last time the pipeline executed.
+
+* Incremental date column: When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table.
+
+* Start reading from beginning: Setting this option with incremental extract instructs ADF to read all rows on the first execution of a pipeline with incremental extract turned on.
+ #### Azure Database for PostgreSQL sink script example When you use Azure Database for PostgreSQL as sink type, the associated data flow script is:
data-factory Data Flow Conditional Split https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conditional-split.md
Previously updated : 01/20/2022 Last updated : 05/12/2022 # Conditional split transformation in mapping data flow
Use the data flow expression builder to enter an expression for the split condit
### Example
-The below example is a conditional split transformation named `SplitByYear` that takes in incoming stream `CleanData`. This transformation has two split conditions `year < 1960` and `year > 1980`. `disjoint` is false because the data goes to the first matching condition. Every row matching the first condition goes to output stream `moviesBefore1960`. All remaining rows matching the second condition go to output stream `moviesAFter1980`. All other rows flow through the default stream `AllOtherMovies`.
+The below example is a conditional split transformation named `SplitByYear` that takes in incoming stream `CleanData`. This transformation has two split conditions `year < 1960` and `year > 1980`. `disjoint` is false because the data goes to the first matching condition rather than all matching conditions. Every row matching the first condition goes to output stream `moviesBefore1960`. All remaining rows matching the second condition go to output stream `moviesAFter1980`. All other rows flow through the default stream `AllOtherMovies`.
In the service UI, this transformation looks like the below image:
data-factory Tutorial Data Flow Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-delta-lake.md
In this step, you'll create a pipeline that contains a data flow activity.
1. On the home page, select **Orchestrate**.
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+ :::image type="content" source="./media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the ADF home page.":::
1. In the **General** tab for the pipeline, enter **DeltaLake** for **Name** of the pipeline. 1. In the **Activities** pane, expand the **Move and Transform** accordion. Drag and drop the **Data Flow** activity from the pane to the pipeline canvas.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
Title: Overview of Azure Policy description: Azure Policy is a service in Azure, that you use to create, assign and, manage policy definitions in your Azure environment. Previously updated : 07/27/2021 Last updated : 05/13/2022 ++ # What is Azure Policy?
environment, with the ability to drill down to the per-resource, per-policy gran
helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources.
+> [!NOTE]
+> For more information on remediation, see
+> [Remediate non-compliant resources with Azure Policy](./how-to/remediate-resources.md).
+ Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. Policy definitions for these common use cases are already available in your Azure environment as built-ins to help you get started.
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md
Resources get changed through the course of daily use, reconfiguration, and even redeployment. Change can come from an individual or by an automated process. Most change is by design, but
-sometimes it isn't. With the **last seven days** of changes, Resource configuration changes enables you to:
+sometimes it isn't. With the **last fourteen days** of changes, Resource configuration changes enables you to:
- Find when changes were detected on an Azure Resource Manager property - For each resource change, see property change details
Monitor.
> [Guest Configuration for VMs](../../policy/concepts/guest-configuration.md). > [!IMPORTANT]
-> Resource configuration changes is in Public Preview and only supports changes to resource types from the [Resources table](..//reference/supported-tables-resources.md#resources) in Resource Graph. This does not yet include changes to the resource container resources, such as Management groups, Subscriptions, and Resource groups. Changes are queryable for seven days.
+> Resource configuration changes only supports changes to resource types from the [Resources table](..//reference/supported-tables-resources.md#resources) in Resource Graph. This does not yet include changes to the resource container resources, such as Subscriptions and Resource groups. Changes are queryable for fourteen days.
## Find detected change events and view change details
Each change resource has the following properties:
- **targetResourceId** - The resourceID of the resource on which the change occurred. - **targetResourceType** - The resource type of the resource on which the change occurred. - **changeType** - Describes the type of change detected for the entire change record. Values are: _Create_, _Update_, and _Delete_. The
- **changes** property dictionary is only included when **changeType** is _Update_. For the _Delete_ case, the change resource will still be maintained as an extension of the deleted resource for seven days, even if the entire Resource group has been deleted. The change resource will not block deletions or impact any existing delete behavior.
+ **changes** property dictionary is only included when **changeType** is _Update_. For the _Delete_ case, the change resource will still be maintained as an extension of the deleted resource for fourteen days, even if the entire Resource group has been deleted. The change resource will not block deletions or impact any existing delete behavior.
- **changes** - Dictionary of the resource properties (with property name as the key) that were updated as part of the change:
hdinsight Hdinsight Hadoop Customize Cluster Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md
Script actions can also be published to the Azure Marketplace as an HDInsight ap
A script action is a Bash script that runs on the nodes in an HDInsight cluster. Characteristics and features of script actions are as follows: - The Bash script URI (the location to access the file) has to be accessible from the HDInsight resource provider and the cluster.+ - The following are possible storage locations:
- - For regular (non-ESP) clusters:
- - A blob in an Azure Storage account that's either the primary or additional storage account for the HDInsight cluster. HDInsight is granted access to both of these types of storage accounts during cluster creation.
-
- > [!IMPORTANT]
- > Do not rotate the storage key on this Azure Storage account, as it will cause subsequent script actions with scripts stored there to fail.
+ - For regular (non-ESP) clusters:
+
+ - A blob in an Azure Storage account that's either the primary or additional storage account for the HDInsight cluster. HDInsight is granted access to both of these types of storage accounts during cluster creation.
+
+ > [!IMPORTANT]
+ > Do not rotate the storage key on this Azure Storage account, as it will cause subsequent script actions with scripts stored there to fail.
- - Data Lake Storage Gen1: The service principal HDInsight uses to access Data Lake Storage must have read access to the script. The Bash script URI format is `adl://DATALAKESTOREACCOUNTNAME.azuredatalakestore.net/path_to_file`.
+ - Data Lake Storage Gen1: The service principal HDInsight uses to access Data Lake Storage must have read access to the script. The Bash script URI format is `adl://DATALAKESTOREACCOUNTNAME.azuredatalakestore.net/path_to_file`.
- - Data Lake Storage Gen2 is not recommended to use for script actions. `abfs://` is not supported for the Bash script URI. `https://` URIs are possible, but those work for containers that have public access, and the firewall open for the HDInsight Resource Provider, and therefore is not recommended.
+ - Data Lake Storage Gen2 is not recommended to use for script actions. `abfs://` is not supported for the Bash script URI. `https://` URIs are possible, but those work for containers that have public access, and the firewall open for the HDInsight Resource Provider, and therefore is not recommended.
- - A public file-sharing service accessible through `https://` paths. Examples are Azure Blob, GitHub, or OneDrive. For example URIs, see [Example script action scripts](#example-script-action-scripts).
+ - A public file-sharing service accessible through `https://` paths. Examples are Azure Blob, GitHub, or OneDrive. For example URIs, see [Example script action scripts](#example-script-action-scripts).
- For clusters with ESP, the `wasb://` or `wasbs://` or `http[s]://` URIs are supported. - The script actions can be restricted to run on only certain node types. Examples are head nodes or worker nodes.+ - The script actions can be persisted or *ad hoc*. - Persisted script actions must have a unique name. Persisted scripts are used to customize new worker nodes added to the cluster through scaling operations. A persisted script might also apply changes to another node type when scaling operations occur. An example is a head node. - *Ad hoc* scripts aren't persisted. Script actions used during cluster creation are automatically persisted. They aren't applied to worker nodes added to the cluster after the script has run. Then you can promote an *ad hoc* script to a persisted script or demote a persisted script to an *ad hoc* script. Scripts that fail aren't persisted, even if you specifically indicate that they should be. - Script actions can accept parameters that are used by the script during execution.+ - Script actions run with root-level privileges on the cluster nodes.+ - Script actions can be used through the Azure portal, Azure PowerShell, Azure CLI, or HDInsight .NET SDK.+ - Script actions that remove or modify service files on the VM may impact service health and availability. The cluster keeps a history of all scripts that have been run. The history helps when you need to find the ID of a script for promotion or demotion operations.
-> [!IMPORTANT]
+> [!IMPORTANT]
> There's no automatic way to undo the changes made by a script action. Either manually reverse the changes or provide a script that reverses them. ## Permissions
Script actions used during cluster creation are slightly different from script a
The following diagram illustrates when script action runs during the creation process: - :::image type="content" source="./media/hdinsight-hadoop-customize-cluster-linux/cluster-provisioning-states.png" alt-text="Stages during cluster creation" border="false"::: The script runs while HDInsight is being configured. The script runs in parallel on all the specified nodes in the cluster. It runs with root privileges on the nodes.
You can do operations like stopping and starting services, including Apache Hado
During cluster creation, you can use many script actions at once. These scripts are invoked in the order in which they were specified.
-> [!IMPORTANT]
+> [!NOTE]
+> If the script is in a storage account other than the one specified as cluster storage (at cluster creation time), that account needs public access.
+
+> [!IMPORTANT]
> Script actions must finish within 60 minutes, or they time out. During cluster provisioning, the script runs concurrently with other setup and configuration processes. Competition for resources such as CPU time or network bandwidth might cause the script to take longer to finish than it does in your development environment.
->
+>
> To minimize the time it takes to run the script, avoid tasks like downloading and compiling applications from the source. Precompile applications and store the binary in Azure Storage. ### Script action on a running cluster
EndTime : 8/14/2017 7:41:05 PM
Status : Succeeded ```
-> [!IMPORTANT]
+> [!IMPORTANT]
> If you change the cluster user (admin) password after the cluster is created, script actions run against this cluster might fail. If you have any persisted script actions that target worker nodes, these scripts might fail when you scale the cluster.
This section explains the different ways you can use script actions when you cre
| Bash script URI |Specify the URI of the script. | | Head/Worker/ZooKeeper |Specify the nodes on which the script is run: **Head**, **Worker**, or **ZooKeeper**. | | Parameters |Specify the parameters, if required by the script. |-
+
Use the __Persist this script action__ entry to make sure that the script is applied during scaling operations. 1. Select __Create__ to save the script. Then you can use __+ Submit new__ to add another script.
This section explains how to apply script actions on a running cluster.
| Bash script URI |Specify the URI of the script. | | Head/Worker/Zookeeper |Specify the nodes on which the script is run: **Head**, **Worker**, or **ZooKeeper**. | | Parameters |Specify the parameters, if required by the script. |-
+
Use the __Persist this script action__ entry to make sure the script is applied during scaling operations. 1. Finally, select the **Create** button to apply the script to the cluster.
The following example script demonstrates using the cmdlets to promote and then
### HDInsight .NET SDK
-For an example of using the .NET SDK to retrieve script history from a cluster, promote or demote scripts, see [
-Apply a Script Action against a running Linux-based HDInsight cluster](https://github.com/Azure-Samples/hdinsight-dotnet-script-action).
+For an example of using the .NET SDK to retrieve script history from a cluster, promote or demote scripts, see
+Apply a Script Action against a running Linux-based HDInsight cluster.
-> [!NOTE]
+> [!NOTE]
> This example also demonstrates how to install an HDInsight application by using the .NET SDK. ## Next steps
hdinsight Hdinsight Sales Insights Etl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sales-insights-etl.md
description: Learn how to use create ETL pipelines with Azure HDInsight to deriv
Previously updated : 04/15/2020 Last updated : 05/13/2022 # Tutorial: Create an end-to-end data pipeline to derive sales insights in Azure HDInsight
hdinsight Tutorial Cli Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/tutorial-cli-rest-proxy.md
Title: 'Tutorial: Create an Apache Kafka REST proxy enabled cluster in HDInsight
description: Learn how to perform Apache Kafka operations using a Kafka REST proxy on Azure HDInsight. Previously updated : 02/27/2020 Last updated : 05/13/2022
hdinsight Safely Manage Jar Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/safely-manage-jar-dependency.md
description: This article discusses best practices for managing Java Archive (JA
Previously updated : 02/05/2020 Last updated : 05/13/2022 # Safely manage jar dependencies
Then you can run `sbt clean` and `sbt assembly` to build the shaded jar file.
* [Use HDInsight IntelliJ Tools](../hadoop/apache-hadoop-visual-studio-tools-get-started.md)
-* [Create a Scala Maven application for Spark in IntelliJ](./apache-spark-create-standalone-application.md)
+* [Create a Scala Maven application for Spark in IntelliJ](./apache-spark-create-standalone-application.md)
hdinsight Apache Troubleshoot Storm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/storm/apache-troubleshoot-storm.md
description: Get answers to common questions about using Apache Storm with Azure
keywords: Azure HDInsight, Storm, FAQ, troubleshooting guide, common problems Previously updated : 11/08/2019 Last updated : 05/13/2022
If you didn't see your problem or are unable to solve your issue, visit one of t
- Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Below are some error codes you may encounter and the solutions to help you resol
**Solution:** Reduce the size of your data or consider Azure API for FHIR, which has a higher storage limit.
+## Bulk import - another option
+
+As illustrated in this article, $import is one way to perform bulk import. Another is the open-source [FHIR Bulk Loader](https://github.com/microsoft/fhir-loader), an Azure Function App solution that provides the following capabilities for ingesting FHIR data:
+
+* Imports FHIR Bundles (compressed and non-compressed) and NDJSON files into a FHIR service
+* High-speed, parallel ingestion using Event Grid triggers from storage accounts or other Event Grid resources
+* Complete auditing, error logging, and retry for throttled transactions
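+
+For context, an NDJSON file contains one FHIR resource per line. A minimal, hypothetical example with two Patient resources:
+
+```json
+{"resourceType":"Patient","id":"patient-1","name":[{"family":"Shaw","given":["Jane"]}]}
+{"resourceType":"Patient","id":"patient-2","name":[{"family":"Doe","given":["John"]}]}
+```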
+ ## Next steps

In this article, you've learned how the Bulk import feature enables importing FHIR data to the FHIR server at high throughput using the $import operation. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse, see
In this article, you've learned about how the Bulk import feature enables import
>[Configure export settings and set up a storage account](configure-export-data.md) >[!div class="nextstepaction"]
->[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
+>[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
iot-edge How To Create Test Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-test-certificates.md
In this section, you clone the IoT Edge repo and execute the scripts.
mkdir wrkdir
cd .\wrkdir\
cp ..\iotedge\tools\CACertificates\*.cnf .
- cp ..\iotedge\tools\CACertificates\certGen.sh .
+ cp ..\iotedge\tools\CACertificates\ca-certs.ps1 .
```

If you downloaded the repo as a ZIP, then the folder name is `iotedge-master` and the rest of the path is the same.
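
With the scripts in place, you would typically dot-source the script to load its functions and then create the test root certificate chain. A sketch based on the script's documented usage (verify the function name against the IoT Edge docs):

```powershell
# Load the certificate-generation functions into the current session
. .\ca-certs.ps1

# Create the root CA and intermediate certificates using RSA keys
New-CACertsCertChain rsa
```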
The certificates in this section are for the steps in the IoT Hub X.509 certific
* `certs/iot-device-<device id>-full-chain.cert.pem`
* `private/iot-device-<device id>.key.pem`
iot-hub-device-update Troubleshoot Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/troubleshoot-device-update.md
_You may not have access permissions configured correctly. Please ensure you hav
### Q: I'm encountering a 500-type error when importing content to the Device Update service.

_An error code in the 500 range may indicate an issue with the Device Update service. Please wait 5 minutes, then try again. If the same error persists, please follow the instructions in the [Contacting Microsoft Support](#contact) section to file a support request with Microsoft._
+### Q: I want to keep the same compatibility properties (target my update to the same device type) but change the Provider or Name in the import manifest. Why do I get the error "Failed: error importing update due to exceeded limit" when I do so?
+_The exact same set of compatibility properties can't be used with more than one update Provider and Name combination. This restriction lets the Device Update service determine with certainty which updates should be available to deploy to a given device. If you need to update multiple components or partitions on a single device, the [proxy updates](./device-update-proxy-updates.md) feature provides that capability._
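+
+For illustration, the fields involved look roughly like this in the import manifest (a simplified, hypothetical sketch; field names and casing may differ by schema version). Reusing the same `compatibility` values under a different `provider`/`name` pair is what triggers the error above:
+
+```json
+{
+  "updateId": { "provider": "Contoso", "name": "Toaster", "version": "1.0" },
+  "compatibility": [ { "manufacturer": "Contoso", "model": "Toaster" } ]
+}
+```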
+ ### Q: I'm encountering an error message when importing content and would like to understand more about it.

_Please refer to the [Device Update Error Codes](./device-update-error-codes.md#device-update-content-service) documentation for more detailed information on import-related error messages._
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys-powershell.md
# Customer intent: As a developer I want storage credentials and SAS tokens to be managed securely by Azure Key Vault.
-# Manage storage account keys with Key Vault and Azure PowerShell
+# Manage storage account keys with Key Vault and Azure PowerShell (legacy)
+> [!IMPORTANT]
+> Key Vault Managed Storage Account Keys (legacy) is supported as-is, with no further updates planned. Only account SAS is supported, with SAS definitions signed with a storage service version no later than 2018-03-28.
+ > [!IMPORTANT]
> We recommend using Azure Storage integration with Azure Active Directory (Azure AD), Microsoft's cloud-based identity and access management service. Azure AD integration is available for [Azure blobs and queues](../../storage/blobs/authorize-access-azure-active-directory.md), and provides OAuth2 token-based access to Azure Storage (just like Azure Key Vault).
> Azure AD allows you to authenticate your client application by using an application or user identity, instead of storage account credentials. You can use an [Azure AD managed identity](../../active-directory/managed-identities-azure-resources/index.yml) when you run on Azure. Managed identities remove the need for client authentication and storing credentials in or with your application. Use the solution below only when Azure AD authentication isn't possible.
Tags :
### Enable key regeneration
-If you want Key Vault to regenerate your storage account keys periodically, you can use the Azure PowerShell [Add-AzKeyVaultManagedStorageAccount](/powershell/module/az.keyvault/add-azkeyvaultmanagedstorageaccount) cmdlet to set a regeneration period. In this example, we set a regeneration period of three days. When it is time to rotate, Key Vault regenerates the key that is not active, and then sets the newly created key as active. Only one of the keys are used to issue SAS tokens at any one time. This is the active key.
+If you want Key Vault to regenerate your storage account keys periodically, you can use the Azure PowerShell [Add-AzKeyVaultManagedStorageAccount](/powershell/module/az.keyvault/add-azkeyvaultmanagedstorageaccount) cmdlet to set a regeneration period. In this example, we set a regeneration period of thirty days. When it's time to rotate, Key Vault regenerates the key that isn't active, and then sets the newly created key as active. Only one of the keys is used to issue SAS tokens at any one time; this is the active key.
```azurepowershell-interactive
-$regenPeriod = [System.Timespan]::FromDays(3)
+$regenPeriod = [System.Timespan]::FromDays(30)
Add-AzKeyVaultManagedStorageAccount -VaultName $keyVaultName -AccountName $storageAccountName -AccountResourceId $storageAccount.Id -ActiveKeyName $storageAccountKey -RegenerationPeriod $regenPeriod ```
AccountName : sacontoso
Account Resource Id : /subscriptions/03f0blll-ce69-483a-a092-d06ea46dfb8z/resourceGroups/rgContoso/providers/Microsoft.Storage/storageAccounts/sacontoso Active Key Name : key1 Auto Regenerate Key : True
-Regeneration Period : 3.00:00:00
+Regeneration Period : 30.00:00:00
Enabled : True Created : 11/19/2018 11:54:47 PM Updated : 11/19/2018 11:54:47 PM
You can also ask Key Vault to generate shared access signature tokens. A shared
The commands in this section complete the following actions: - Set an account shared access signature definition.-- Create an account shared access signature token for Blob, File, Table, and Queue services. The token is created for resource types Service, Container, and Object. The token is created with all permissions, over https, and with the specified start and end dates. - Set a Key Vault managed storage shared access signature definition in the vault. The definition has the template URI of the shared access signature token that was created. The definition has the shared access signature type `account` and is valid for N days. - Verify that the shared access signature was saved in your key vault as a secret.
$keyVaultName = <YourKeyVaultName>
$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -Protocol Https -StorageAccountKey Key1 #(or "Primary" for Classic Storage Account) ```
-### Create a shared access signature token
+### Define a shared access signature definition template
-Create a shared access signature definition using the Azure PowerShell [New-AzStorageAccountSASToken](/powershell/module/az.storage/new-azstorageaccountsastoken) cmdlets.
+Key Vault uses a SAS definition template to generate tokens for client applications.
+Here's an example SAS definition template:
```azurepowershell-interactive
-$start = [System.DateTime]::Now.AddDays(-1)
-$end = [System.DateTime]::Now.AddMonths(1)
-
-$sasToken = New-AzStorageAccountSasToken -Service blob,file,Table,Queue -ResourceType Service,Container,Object -Permission "racwdlup" -Protocol HttpsOnly -StartTime $start -ExpiryTime $end -Context $storageContext
+$sasTemplate="sv=2018-03-28&ss=bfqt&srt=sco&sp=rw&spr=https"
```
-The value of $sasToken will look similar to this.
-```console
-?sv=2018-11-09&sig=5GWqHFkEOtM7W9alOgoXSCOJO%2B55qJr4J7tHQjCId9S%3D&spr=https&st=2019-09-18T18%3A25%3A00Z&se=2019-10-19T18%3A25%3A00Z&srt=sco&ss=bfqt&sp=racupwdl
-```
+#### Account SAS parameters required in SAS definition template for Key Vault
+|SAS Query Parameter|Description|
+|-|--|
+|`SignedVersion (sv)`|Required. Specifies the signed storage service version to use to authorize requests made with this account SAS. Must be set to version 2015-04-05 or later. **Key Vault supports versions no later than 2018-03-28**|
+|`SignedServices (ss)`|Required. Specifies the signed services accessible with the account SAS. Possible values include:<br /><br /> - Blob (`b`)<br />- Queue (`q`)<br />- Table (`t`)<br />- File (`f`)<br /><br /> You can combine values to provide access to more than one service. For example, `ss=bf` specifies access to the Blob and File endpoints.|
+|`SignedResourceTypes (srt)`|Required. Specifies the signed resource types that are accessible with the account SAS.<br /><br /> - Service (`s`): Access to service-level APIs (*e.g.*, Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)<br />- Container (`c`): Access to container-level APIs (*e.g.*, Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)<br />- Object (`o`): Access to object-level APIs for blobs, queue messages, table entities, and files (*e.g.*, Put Blob, Query Entity, Get Messages, Create File)<br /><br /> You can combine values to provide access to more than one resource type. For example, `srt=sc` specifies access to service and container resources.|
+|`SignedPermission (sp)`|Required. Specifies the signed permissions for the account SAS. Permissions are only valid if they match the specified signed resource type; otherwise they are ignored.<br /><br /> - Read (`r`): Valid for all signed resource types (Service, Container, and Object). Permits read permissions to the specified resource type.<br />- Write (`w`): Valid for all signed resource types (Service, Container, and Object). Permits write permissions to the specified resource type.<br />- Delete (`d`): Valid for Container and Object resource types, except for queue messages.<br />- Permanent Delete (`y`): Valid for Object resource type of Blob only.<br />- List (`l`): Valid for Service and Container resource types only.<br />- Add (`a`): Valid for the following Object resource types only: queue messages, table entities, and append blobs.<br />- Create (`c`): Valid for the following Object resource types only: blobs and files. Users can create new blobs or files, but may not overwrite existing blobs or files.<br />- Update (`u`): Valid for the following Object resource types only: queue messages and table entities.<br />- Process (`p`): Valid for the following Object resource type only: queue messages.<br/>- Tag (`t`): Valid for the following Object resource type only: blobs. Permits blob tag operations.<br/>- Filter (`f`): Valid for the following Object resource type only: blob. Permits filtering by blob tag.<br/>- Set Immutability Policy (`i`): Valid for the following Object resource type only: blob. Permits set/delete immutability policy and legal hold on a blob.|
+|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.<br /><br /> Note that HTTP only is not a permitted value.|
+
+For more information about account SAS, see [Create an account SAS](/rest/api/storageservices/create-account-sas).
+
+> [!NOTE]
+> Key Vault ignores lifetime parameters, such as 'Signed Expiry' and 'Signed Start', as well as any parameters introduced after the 2018-03-28 version
-### Generate a shared access signature definition
+### Set shared access signature definition in Key Vault
Use the Azure PowerShell [Set-AzKeyVaultManagedStorageSasDefinition](/powershell/module/az.keyvault/set-azkeyvaultmanagedstoragesasdefinition) cmdlet to create a shared access signature definition. You can provide a name of your choice to the `-Name` parameter.

```azurepowershell-interactive
-Set-AzKeyVaultManagedStorageSasDefinition -AccountName $storageAccountName -VaultName $keyVaultName -Name <YourSASDefinitionName> -TemplateUri $sasToken -SasType 'account' -ValidityPeriod ([System.Timespan]::FromDays(30))
+Set-AzKeyVaultManagedStorageSasDefinition -AccountName $storageAccountName -VaultName $keyVaultName -Name <YourSASDefinitionName> -TemplateUri $sasTemplate -SasType 'account' -ValidityPeriod ([System.Timespan]::FromDays(1))
```

### Verify the shared access signature definition
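
The SAS definition is stored as a secret in your key vault. As a sketch (the `<StorageAccountName>-<SasDefinitionName>` secret-name format is an assumption to confirm in your vault), you can verify it with [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret):

```azurepowershell-interactive
# List the secrets in the vault; the SAS definition appears among them
Get-AzKeyVaultSecret -VaultName $keyVaultName

# Fetch the current SAS token value (-AsPlainText requires Az.KeyVault 2.x or later)
Get-AzKeyVaultSecret -VaultName $keyVaultName -Name "$storageAccountName-<YourSASDefinitionName>" -AsPlainText
```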
key-vault Overview Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys.md
# Customer intent: As a developer, I want to use Azure Key Vault and Azure CLI for secure management of my storage credentials and shared access signature tokens.
-# Manage storage account keys with Key Vault and the Azure CLI
+# Manage storage account keys with Key Vault and the Azure CLI (legacy)
+
+> [!IMPORTANT]
+> Key Vault Managed Storage Account Keys (legacy) is supported as-is, with no further updates planned. Only account SAS is supported, with SAS definitions signed with a storage service version no later than 2018-03-28.
+ > [!IMPORTANT]
> We recommend using Azure Storage integration with Azure Active Directory (Azure AD), Microsoft's cloud-based identity and access management service. Azure AD integration is available for [Azure blobs and queues](../../storage/blobs/authorize-access-azure-active-directory.md), and provides OAuth2 token-based access to Azure Storage (just like Azure Key Vault).
> Azure AD allows you to authenticate your client application by using an application or user identity, instead of storage account credentials. You can use an [Azure AD managed identity](../../active-directory/managed-identities-azure-resources/index.yml) when you run on Azure. Managed identities remove the need for client authentication and storing credentials in or with your application. Use the solution below only when Azure AD authentication isn't possible.
When you use the managed storage account key feature, consider the following poi
## Service principal application ID
-An Azure AD tenant provides each registered application with a [service principal](../../active-directory/develop/developer-glossary.md#service-principal-object). The service principal serves as the Application ID, which is used during authorization setup for access to other Azure resources via Azure RBAC.
+An Azure AD tenant provides each registered application with a [service principal](../../active-directory/develop/developer-glossary.md#service-principal-object). The service principal serves as the Application ID, which is used during authorization setup for access to other Azure resources via Azure role-based access control (Azure RBAC).
Key Vault is a Microsoft application that's pre-registered in all Azure AD tenants. Key Vault is registered under the same Application ID in each Azure cloud.
Key Vault is a Microsoft application that's pre-registered in all Azure AD tenan
## Prerequisites
-To complete this guide, you must first do the following:
+To complete this guide, you must first take the following steps:
- [Install the Azure CLI](/cli/azure/install-azure-cli). - [Create a key vault](quick-create-cli.md)
az login
Use the Azure CLI [az role assignment create](/cli/azure/role/assignment) command to give Key Vault access your storage account. Provide the command the following parameter values: - `--role`: Pass the "Storage Account Key Operator Service Role" Azure role. This role limits the access scope to your storage account. For a classic storage account, pass "Classic Storage Account Key Operator Service Role" instead.-- `--assignee`: Pass the value "https://vault.azure.net", which is the url for Key Vault in the Azure public cloud. (For Azure Goverment cloud use '--assignee-object-id' instead, see [Service principal application ID](#service-principal-application-id).)-- `--scope`: Pass your storage account resource ID, which is in the form `/subscriptions/<subscriptionID>/resourceGroups/<StorageAccountResourceGroupName>/providers/Microsoft.Storage/storageAccounts/<YourStorageAccountName>`. To find your subscription ID, use the Azure CLI [az account list](/cli/azure/account?#az-account-list) command; to find your storage account name and storage account resource group, use the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command.
+- `--assignee`: Pass the value "https://vault.azure.net", which is the URL for Key Vault in the Azure public cloud. (For the Azure Government cloud, use `--assignee-object-id` instead; see [Service principal application ID](#service-principal-application-id).)
+- `--scope`: Pass your storage account resource ID, which is in the form `/subscriptions/<subscriptionID>/resourceGroups/<StorageAccountResourceGroupName>/providers/Microsoft.Storage/storageAccounts/<YourStorageAccountName>`. Find your subscription ID by using the Azure CLI [az account list](/cli/azure/account?#az-account-list) command. Find your storage account name and storage account resource group by using the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command.
```azurecli-interactive az role assignment create --role "Storage Account Key Operator Service Role" --assignee "https://vault.azure.net" --scope "/subscriptions/<subscriptionID>/resourceGroups/<StorageAccountResourceGroupName>/providers/Microsoft.Storage/storageAccounts/<YourStorageAccountName>"
az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --storage
Note that permissions for storage accounts aren't available on the storage account "Access policies" page in the Azure portal. ### Create a Key Vault Managed storage account
- Create a Key Vault managed storage account using the Azure CLI [az keyvault storage](/cli/azure/keyvault/storage?#az-keyvault-storage-add) command. Set a regeneration period of 90 days. When it is time to rotate, KeyVault regenerates the key that is not active, and then sets the newly created key as active. Only one of the keys are used to issue SAS tokens at any one time, this is the active key. Provide the command the following parameter values:
+ Create a Key Vault managed storage account using the Azure CLI [az keyvault storage](/cli/azure/keyvault/storage?#az-keyvault-storage-add) command. Set a regeneration period of 30 days. When it's time to rotate, Key Vault regenerates the key that isn't active, and then sets the newly created key as active. Only one of the keys is used to issue SAS tokens at any one time; this is the active key. Provide the command the following parameter values:
- `--vault-name`: Pass the name of your key vault. To find the name of your key vault, use the Azure CLI [az keyvault list](/cli/azure/keyvault?#az-keyvault-list) command. - `-n`: Pass the name of your storage account. To find the name of your storage account, use the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command.-- `--resource-id`: Pass your storage account resource ID, which is in the form `/subscriptions/<subscriptionID>/resourceGroups/<StorageAccountResourceGroupName>/providers/Microsoft.Storage/storageAccounts/<YourStorageAccountName>`. To find your subscription ID, use the Azure CLI [az account list](/cli/azure/account?#az-account-list) command; to find your storage account name and storage account resource group, use the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command.
+- `--resource-id`: Pass your storage account resource ID, which is in the form `/subscriptions/<subscriptionID>/resourceGroups/<StorageAccountResourceGroupName>/providers/Microsoft.Storage/storageAccounts/<YourStorageAccountName>`. Find your subscription ID by using the Azure CLI [az account list](/cli/azure/account?#az-account-list) command. Find your storage account name and storage account resource group by using the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command.
```azurecli-interactive
-az keyvault storage add --vault-name <YourKeyVaultName> -n <YourStorageAccountName> --active-key-name key1 --auto-regenerate-key --regeneration-period P90D --resource-id "/subscriptions/<subscriptionID>/resourceGroups/<StorageAccountResourceGroupName>/providers/Microsoft.Storage/storageAccounts/<YourStorageAccountName>"
+az keyvault storage add --vault-name <YourKeyVaultName> -n <YourStorageAccountName> --active-key-name key1 --auto-regenerate-key --regeneration-period P30D --resource-id "/subscriptions/<subscriptionID>/resourceGroups/<StorageAccountResourceGroupName>/providers/Microsoft.Storage/storageAccounts/<YourStorageAccountName>"
``` ## Shared access signature tokens
You can also ask Key Vault to generate shared access signature tokens. A shared
The commands in this section complete the following actions: - Set an account shared access signature definition `<YourSASDefinitionName>`. The definition is set on a Key Vault managed storage account `<YourStorageAccountName>` in your key vault `<YourKeyVaultName>`.-- Create an account shared access signature token for Blob, File, Table, and Queue services. The token is created for resource types Service, Container, and Object. The token is created with all permissions, over https, and with the specified start and end dates. - Set a Key Vault managed storage shared access signature definition in the vault. The definition has the template URI of the shared access signature token that was created. The definition has the shared access signature type `account` and is valid for N days. - Verify that the shared access signature was saved in your key vault as a secret.
-### Create a shared access signature token
+### Define a shared access signature definition template
-Create a shared access signature definition using the Azure CLI [az storage account generate-sas](/cli/azure/storage/account?#az-storage-account-generate-sas) command. This operation requires the `storage` and `setsas` permissions.
--
-```azurecli-interactive
-az storage account generate-sas --expiry 2020-01-01 --permissions rw --resource-types sco --services bfqt --https-only --account-name <YourStorageAccountName> --account-key 00000000
-```
-After the operation runs successfully, copy the output.
+Key Vault uses a SAS definition template to generate tokens for client applications.
+Here's an example SAS definition template:
```console
-"se=2020-01-01&sp=***"
+"sv=2018-03-28&ss=bfqt&srt=sco&sp=rw&spr=https"
```
-This output will be the passed to the `--template-uri` parameter in the next step.
+The SAS definition template will be passed to the `--template-uri` parameter in the next step.
+
+#### Account SAS parameters required in SAS definition template for Key Vault
+
+|SAS Query Parameter|Description|
+|-|--|
+|`SignedVersion (sv)`|Required. Specifies the signed storage service version to use to authorize requests made with this account SAS. Must be set to version 2015-04-05 or later. **Key Vault supports versions no later than 2018-03-28**|
+|`SignedServices (ss)`|Required. Specifies the signed services accessible with the account SAS. Possible values include:<br /><br /> - Blob (`b`)<br />- Queue (`q`)<br />- Table (`t`)<br />- File (`f`)<br /><br /> You can combine values to provide access to more than one service. For example, `ss=bf` specifies access to the Blob and File endpoints.|
+|`SignedResourceTypes (srt)`|Required. Specifies the signed resource types that are accessible with the account SAS.<br /><br /> - Service (`s`): Access to service-level APIs (*for example*, Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)<br />- Container (`c`): Access to container-level APIs (*for example*, Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)<br />- Object (`o`): Access to object-level APIs for blobs, queue messages, table entities, and files (*for example*, Put Blob, Query Entity, Get Messages, Create File)<br /><br /> You can combine values to provide access to more than one resource type. For example, `srt=sc` specifies access to service and container resources.|
+|`SignedPermission (sp)`|Required. Specifies the signed permissions for the account SAS. Permissions are only valid if they match the specified signed resource type; otherwise they're ignored.<br /><br /> - Read (`r`): Valid for all signed resource types (Service, Container, and Object). Permits read permissions to the specified resource type.<br />- Write (`w`): Valid for all signed resource types (Service, Container, and Object). Permits write permissions to the specified resource type.<br />- Delete (`d`): Valid for Container and Object resource types, except for queue messages.<br />- Permanent Delete (`y`): Valid for Object resource type of Blob only.<br />- List (`l`): Valid for Service and Container resource types only.<br />- Add (`a`): Valid for the following Object resource types only: queue messages, table entities, and append blobs.<br />- Create (`c`): Valid for the following Object resource types only: blobs and files. Users can create new blobs or files, but may not overwrite existing blobs or files.<br />- Update (`u`): Valid for the following Object resource types only: queue messages and table entities.<br />- Process (`p`): Valid for the following Object resource type only: queue messages.<br/>- Tag (`t`): Valid for the following Object resource type only: blobs. Permits blob tag operations.<br/>- Filter (`f`): Valid for the following Object resource type only: blob. Permits filtering by blob tag.<br/>- Set Immutability Policy (`i`): Valid for the following Object resource type only: blob. Permits set/delete immutability policy and legal hold on a blob.|
+|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.<br /><br /> Note that HTTP only isn't a permitted value.|
+
+For more information about account SAS, see [Create an account SAS](/rest/api/storageservices/create-account-sas).
+
+> [!NOTE]
+> Key Vault ignores lifetime parameters, such as 'Signed Expiry' and 'Signed Start', as well as any parameters introduced after the 2018-03-28 version
-### Generate a shared access signature definition
+### Set shared access signature definition in Key Vault
-Use the the Azure CLI [az keyvault storage sas-definition create](/cli/azure/keyvault/storage/sas-definition?#az-keyvault-storage-sas-definition-create) command, passing the output from the previous step to the `--template-uri` parameter, to create a shared access signature definition. You can provide the name of your choice to the `-n` parameter.
+Use the Azure CLI [az keyvault storage sas-definition create](/cli/azure/keyvault/storage/sas-definition?#az-keyvault-storage-sas-definition-create) command, passing the SAS definition template from the previous step to the `--template-uri` parameter, to create a shared access signature definition. You can provide the name of your choice to the `-n` parameter.
```azurecli-interactive
-az keyvault storage sas-definition create --vault-name <YourKeyVaultName> --account-name <YourStorageAccountName> -n <YourSASDefinitionName> --validity-period P2D --sas-type account --template-uri <OutputOfSasTokenCreationStep>
+az keyvault storage sas-definition create --vault-name <YourKeyVaultName> --account-name <YourStorageAccountName> -n <YourSASDefinitionName> --validity-period P2D --sas-type account --template-uri <sasDefinitionTemplate>
```

### Verify the shared access signature definition
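
As a sketch (the `<YourStorageAccountName>-<YourSASDefinitionName>` secret-name format is an assumption to confirm in your vault), you can check that the definition was saved as a secret with the Azure CLI:

```azurecli-interactive
# List the secrets in the vault; the SAS definition appears among them
az keyvault secret list --vault-name <YourKeyVaultName>

# Show the secret that backs the SAS definition
az keyvault secret show --vault-name <YourKeyVaultName> --name "<YourStorageAccountName>-<YourSASDefinitionName>"
```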
key-vault Storage Keys Sas Tokens Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/storage-keys-sas-tokens-code.md
# Customer intent: As a developer I want storage credentials and SAS tokens to be managed securely by Azure Key Vault.
-# Create SAS definition and fetch shared access signature tokens in code
+# Create SAS definition and fetch shared access signature tokens in code (legacy)
You can manage your storage account with shared access signature (SAS) tokens stored in your key vault. For more information, see [Grant limited access to Azure Storage resources using SAS](../../storage/common/storage-sas-overview.md).
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
For information on VM sizes and their cost, see the [Azure Lab Services Pricing]
| Medium (nested virtualization) | 4 | 16 | [Standard_D4s_v4](../virtual-machines/dv4-dsv4-series.md) | Best suited for relational databases, in-memory caching, and analytics. This size also supports nested virtualization. |
| Large | 8 | 16 | [Standard_F8s_v2](../virtual-machines/fsv2-series.md) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. |
| Large (nested virtualization) | 8 | 32 | [Standard_D8s_v4](../virtual-machines/dv4-dsv4-series.md) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. This size also supports nested virtualization. |
+| Small GPU (Compute) | 6 | 112 | [Standard_NC6s_v3](../virtual-machines/ncv3-series.md) | Best suited for compute-intensive applications such as AI and deep learning. |
| Small GPU (visualization) | 8 | 28 | [Standard_NVas_v4](../virtual-machines/nvv4-series.md) *Windows only* | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. |
| Medium GPU (visualization) | 12 | 112 | [Standard_NV12s_v3](../virtual-machines/nvv3-series.md) | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. |
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
+
+ Title: Network isolation change with our new API platform on Azure Resource Manager
+
+description: 'Explain network isolation changes with our new API platform on Azure Resource Manager and how to maintain network isolation'
+++++++ Last updated : 05/13/2022++
+# Network Isolation Change with Our New API Platform on Azure Resource Manager
+
+In this article, you'll learn about network isolation changes with our new v2 API platform on Azure Resource Manager (ARM) and its effect on network isolation.
+
+## What is the new API platform on Azure Resource Manager (ARM)
+
+There are two types of operations used by the v1 and v2 APIs: operations against __Azure Resource Manager (ARM)__ and operations against the __Azure Machine Learning workspace__.
+
+With the v1 API, most operations used the workspace. For v2, we've moved most operations to use public ARM.
+
+| API version | Public ARM | Inside workspace virtual network |
+| -- | -- | -- |
+| v1 | Workspace and compute create, update, and delete (CRUD) operations. | Other operations such as experiments. |
+| v2 | Most operations such as workspace, compute, datastore, dataset, job, environment, code, component, endpoints. | Remaining operations. |
++
+The v2 API provides a consistent API in one place. You can more easily use Azure role-based access control and Azure Policy for resources with the v2 API because it's based on Azure Resource Manager.
+
+The Azure Machine Learning CLI v2 uses our new v2 API platform. New features such as [managed online endpoints](concept-endpoints.md) are only available using the v2 API platform.
+
+## What are the network isolation changes with V2
+
+As mentioned in the previous section, there are two types of operations: those that use ARM and those that use the workspace. With the __legacy v1 API__, most operations used the workspace, so adding a private endpoint to the workspace provided network isolation for everything except CRUD operations on the workspace or compute resources.
+
+With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/jobs/create-or-update) API sends metadata and [parameters](/azure/machine-learning/reference-yaml-job-command).
+
+> [!TIP]
+> * Public ARM operations do not surface data in your storage account on public networks.
+> * Your communication with public ARM is encrypted using TLS 1.2.
+
+If you need time to evaluate the new v2 API before adopting it in your enterprise solutions, or have a company policy that prohibits sending communication over public networks, you can enable the *v1_legacy_mode* parameter. When enabled, this parameter disables the v2 API for your workspace.
+
+> [!IMPORTANT]
+> Enabling v1_legacy_mode may prevent you from using features provided by the v2 API. For example, some features of Azure Machine Learning studio may be unavailable.
+
+## Scenarios and Required Actions
+
+> [!WARNING]
+> The *v1_legacy_mode* parameter is available now, but the v2 API blocking functionality will be enforced starting the week of May 15th, 2022.
+
+* If you don't plan on using a private endpoint with your workspace, you don't need to enable the parameter.
+
+* If you're OK with operations communicating with public ARM, you don't need to enable the parameter.
+
+* You only need to enable the parameter if you're using a private endpoint with the workspace _and_ don't want to allow operations with ARM over public networks.
+
+Once we implement the parameter, it will be retroactively applied to existing workspaces using the following logic:
+
+* If you have __an existing workspace with a private endpoint__, the flag will be __true__.
+
+* If you have __an existing workspace without a private endpoint__ (public workspace), the flag will be __false__.
+
+After the parameter has been implemented, the default value of the flag depends on the underlying REST API version used when you create a workspace (with a private endpoint):
+
+* If the API version is __older__ than `2022-05-01`, then the flag is __true__ by default.
+* If the API version is `2022-05-01` or __newer__, then the flag is __false__ by default.
+
+> [!IMPORTANT]
+> If you want to use the v2 API with your workspace, you must set the v1_legacy_mode parameter to __false__.
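+
+For reference, when you create a workspace with API version `2022-05-01` or newer, the flag surfaces as a workspace property. A sketch of the relevant ARM template fragment; the `v1LegacyMode` property name is an assumption drawn from the REST API, so verify it against the workspace template reference:
+
+```json
+{
+  "type": "Microsoft.MachineLearningServices/workspaces",
+  "apiVersion": "2022-05-01",
+  "name": "myworkspace",
+  "location": "westus2",
+  "properties": {
+    "v1LegacyMode": false
+  }
+}
+```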
+
+## How to update v1_legacy_mode parameter
+
+> [!WARNING]
+> The *v1_legacy_mode* parameter is available now, but the v2 API blocking functionality will be enforced starting the week of May 15th, 2022.
+
+To update v1_legacy_mode, use the following steps:
+
+# [Python](#tab/python)
+
+To disable v1_legacy_mode, use [Workspace.update](/python/api/azureml-core/azureml.core.workspace(class)#update-friendly-name-none--description-none--tags-none--image-build-compute-none--service-managed-resources-settings-none--primary-user-assigned-identity-none--allow-public-access-when-behind-vnet-none-) and set `v1_legacy_mode=False`.
+
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+ws.update(v1_legacy_mode=False)
+```
+
+# [Azure CLI extension v1](#tab/azurecliextensionv1)
+
+The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) command. To enable the parameter for a workspace, add the parameter `--set v1_legacy_mode=true`.
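+
+For example, a sketch with placeholder workspace and resource group names:
+
+```azurecli
+az ml workspace update --workspace-name myworkspace --resource-group myresourcegroup --set v1_legacy_mode=true
+```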
+++
+## Next steps
+
+* [Use a private endpoint with Azure Machine Learning workspace](how-to-configure-private-link.md).
+* [Create private link for managing Azure resources](/azure/azure-resource-manager/management/create-private-link-access-portal).
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
Azure Private Link enables you to connect to your workspace using a private endp
> * [Secure training environments](how-to-secure-training-vnet.md). > * [Secure inference environments](how-to-secure-inferencing-vnet.md). > * [Use Azure Machine Learning studio in a VNet](how-to-enable-studio-virtual-network.md).
+> * [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
## Prerequisites
ws = Workspace.create(name='myworkspace',
# [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2)
-When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), a YAML document is used to configure the workspace. The following is an of creating a new workspace using a YAML configuration:
+When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), a YAML document is used to configure the workspace. The following is an example of creating a new workspace using a YAML configuration:
> [!TIP] > When using private link, your workspace cannot use Azure Container Registry tasks compute for image building. The `image_build_compute` property in this configuration specifies a CPU compute cluster name to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the `public_network_access` property.
The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learn
In some situations, you may want to allow someone to connect to your secured workspace over a public endpoint, instead of through the VNet. Or you may want to remove the workspace from the VNet and re-enable public access. > [!IMPORTANT]
-> Enabling public access doesn't remove any private endpoints that exist. All communications between components behind the VNet that the private endpoint(s) connect to is still secured. It enables public access only to the workspace, in addition to the private access through any private endpoints.
+> Enabling public access doesn't remove any private endpoints that exist. All communications between components behind the VNet that the private endpoint(s) connect to are still secured. It enables public access only to the workspace, in addition to the private access through any private endpoints.
> [!WARNING] > When connecting over the public endpoint while the workspace uses a private endpoint to communicate with other resources:
If you want to create an isolated Azure Kubernetes Service used by the workspace
* For more information on securing your Azure Machine Learning workspace, see the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article. * If you plan on using a custom DNS solution in your virtual network, see [how to use a workspace with a custom DNS server](how-to-custom-dns.md).+
+* [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
Secure Azure Machine Learning workspace resources and compute environments using
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md)
+> * [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
> > For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
This article is part of a series on securing an Azure Machine Learning workflow.
* [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
-* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article, you learn how to secure an Azure Machine Learning workspace and
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md)
+> * [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
> > For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
This article is part of a series on securing an Azure Machine Learning workflow.
* [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md) * [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md)
-* [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md)
+* [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md)
+* [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
marketplace Cloud Solution Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/cloud-solution-providers.md
description: Learn how to sell your offers through the Microsoft Cloud Solution
-- Previously updated : 07/14/2020++ Last updated : 05/10/2022 # Cloud Solution Provider program
marketplace Dynamics 365 Customer Engage Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-availability.md
Your selections here apply only to new acquisitions; if someone already has your
Before you publish your offer live to the broader marketplace offer, you'll first need to make it available to a limited **Preview audience**. Enter a **Hide key** (any string using only lowercase letters and/or numbers) here. Members of your preview audience can use this hide key as a token to view a preview of your offer in the marketplace.
-Then, when you're ready to make your offer available and remove the preview restriction, you'll need to remove the **Hide key** and publish again.
+Then, when you're ready to make your offer available and remove the preview restriction, you'll need to remove the **Hide key** and publish the offer again.
Select **Save draft** before continuing to the next tab in the left-nav menu.
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
Before you start, create a commercial marketplace account in [Partner Center](./
## Before you begin
-Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It will explain the technical requirements for this offer and list the information and assets youΓÇÖll need when you create it.
+Review [Plan a Microsoft Dynamics 365 offer](marketplace-dynamics-365.md). It will explain the technical requirements for this offer and list the information and assets you'll need when you create it.
## Create a new offer
marketplace Dynamics 365 Customer Engage Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-plans.md
You need to define at least one plan, if your offer has app license management e
## Create a plan
+1. In the left-nav, select **Plan overview**.
1. Near the top of the **Plan overview** page, select **+ Create new plan**.
1. In the dialog box that appears, in the **Plan ID** box, enter a unique plan ID. Use up to 50 lowercase alphanumeric characters, dashes, or underscores. You cannot modify the plan ID after you select **Create**.
1. In the **Plan name** box, enter a unique name for this plan. Use a maximum of 200 characters.
On the **Plan listing** tab, you can define the plan name and description as you
## Copy the Service IDs
-You need to copy the Service ID of each plan you created so you can map them to your solution package in the next step.
+You need to copy the Service ID of each plan you created so that you can map them to your solution package in the next section, **Add Service IDs to your solution package**.
- For each plan you created, copy the Service ID to a safe place. You'll add them to your solution package in the next step. The service ID is listed on the **Plan overview** page in the form of `ISV name.offer name.plan ID`. For example, Fabrikam.F365.bronze.
You need to copy the Service ID of each plan you created so you can map them to
## Add Service IDs to your solution package
-1. Add the Service IDs you copied in the previous step to your solution package. To learn how, see [Adding license metadata to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution) and [Create an AppSource package for your app](/powerapps/developer/data-platform/create-package-app-appsource).
+1. Add the Service IDs you copied in the previous step to your solution package. To learn how, see [Add licensing information to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution) and [Create an AppSource package for your app](/powerapps/developer/data-platform/create-package-app-appsource).
1. After you create the CRM package .zip file, upload it to Azure Blob Storage. You will need to provide the SAS URL of the Azure Blob Storage account that contains the uploaded CRM package .zip file. ## Next steps
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
- **Minor version upgrade for Azure Database for MySQL - Flexible server to 8.0.28** Azure Database for MySQL - Flexible Server 8.0 now is running on minor version 8.0.28*, to learn more about changes coming in this minor version [visit Changes in MySQL 8.0.28 (2022-01-18, General Availability)](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html) -- **Minor version upgrade for Azure Database for MySQL - Single server to 5.7.37**
+- **Minor version upgrade for Azure Database for MySQL - Flexible server to 5.7.37**
Azure Database for MySQL - Flexible Server 5.7 is now running on minor version 5.7.37*. To learn more about the changes in this minor version, visit [Changes in MySQL 5.7.37 (2022-01-18, General Availability)](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html).

\* Note that some regions are still running older minor versions of Azure Database for MySQL and will be patched by the end of April 2022.
object-anchors Model Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/model-conversion-error-codes.md
For common modes of model conversion failure, the `Azure.MixedReality.ObjectAnch
| INVALID_JOB_ID | The provided ID for the asset conversion job to be created was set to the default all-zero GUID. | If a GUID is specified when creating an asset conversion job, ensure it is not the default all-zero GUID. | | INVALID_GRAVITY | The gravity vector provided when creating the asset conversion job was a fully zeroed vector. | When starting an asset conversion, provide the gravity vector that corresponds to the uploaded asset. | | INVALID_SCALE | The provided scale factor was not a positive non-zero value. | When starting an asset conversion, provide the scalar value that corresponds to the measurement unit scale (with regard to meters) of the uploaded asset. |
-| ASSET_SIZE_TOO_LARGE | The intermediate .PLY file generated from the asset or its serialized equivalent was too large. | Refer to the asset size guidelines before submitting an asset for conversion to ensure conformity: aka.ms/aoa/faq |
-| ASSET_DIMENSIONS_OUT_OF_BOUNDS | The dimensions of the asset exceeded the physical dimension limit. This can be a sign of an improperly set scale for the asset when creating a job. | Refer to the asset size guidelines before submitting an asset for conversion to ensure conformity, and ensure the provided scale corresponds to the uploaded asset: aka.ms/aoa/faq |
+| ASSET_SIZE_TOO_LARGE | The intermediate .PLY file generated from the asset or its serialized equivalent was too large. | Refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
+| ASSET_DIMENSIONS_OUT_OF_BOUNDS | The dimensions of the asset exceeded the physical dimension limit. This can be a sign of an improperly set scale for the asset when creating a job. | Inspect the `ScaledAssetDimensions` property in your `AssetConversionProperties` object: it will contain the actual dimensions of the asset that were calculated after applying scale (in meters). Then, refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity, and ensure the provided scale corresponds to the uploaded asset. |
| ZERO_FACES | The intermediate .PLY file generated from the asset was determined to have no faces, making it invalid for conversion. | Ensure the asset is a valid mesh. | | INVALID_FACE_VERTICES | The intermediate .PLY file generated from the asset contained faces that referenced nonexistent vertices. | Ensure the asset file is validly constructed. |
-| ZERO_TRAJECTORIES_GENERATED | The camera trajectories generated from the uploaded asset were empty. | Refer to the asset guidelines before submitting an asset for conversion to ensure conformity: aka.ms/aoa/faq |
-| TOO_MANY_RIG_POSES | The number of rig poses in the intermediate .PLY file exceeded service limits. | Refer to the asset size guidelines before submitting an asset for conversion to ensure conformity: aka.ms/aoa/faq |
+| ZERO_TRAJECTORIES_GENERATED | The camera trajectories generated from the uploaded asset were empty. | Refer to the [asset guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
+| TOO_MANY_RIG_POSES | The number of rig poses in the intermediate .PLY file exceeded service limits. | Refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
| SERVICE_ERROR | An unknown service error occurred. | Contact a member of the Object Anchors service team if the issue persists: https://github.com/Azure/azure-object-anchors/issues |
-| ASSET_CANNOT_BE_CONVERTED | The provided asset was corrupted, malformed, or otherwise unable to be converted in its provided format. | Ensure the asset is a validly constructed file of the specified type, and refer to the asset size guidelines before submitting an asset for conversion to ensure conformity: aka.ms/aoa/faq |
+| ASSET_CANNOT_BE_CONVERTED | The provided asset was corrupted, malformed, or otherwise unable to be converted in its provided format. | Ensure the asset is a validly constructed file of the specified type, and refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
Any errors that occur outside the actual asset conversion jobs will be thrown as exceptions. Most notably, the `Azure.RequestFailedException` can be thrown for service calls that receive an unsuccessful (4xx or 5xx) or unexpected HTTP response code. For further details on these exceptions, examine the `Status`, `ErrorCode`, or `Message` fields on the exception.
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-single-to-flexible.md
+
+ Title: "Migrate from Azure Database for PostgreSQL Single Server to Flexible Server - Concepts"
+
+description: Concepts about migrating your Single Server to Azure Database for PostgreSQL Flexible Server.
++++ Last updated : 05/11/2022+++
+# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (Preview)
+
+>[!NOTE]
+> Single Server to Flexible Server migration feature is in public preview.
+
+Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over the maintenance window. The Single to Flexible Server migration feature enables customers to migrate their databases from Single Server to Flexible Server. See [this documentation](./flexible-server/concepts-compare-single-server-flexible-server.md) to understand the differences between Single Server and Flexible Server. Customers can initiate migrations for multiple servers and databases in a repeatable fashion using this feature, which automates most of the steps needed for the migration, making the journey across Azure platforms as seamless as possible. The feature is provided to customers free of cost.
+
+Single to Flexible server migration is enabled in **Preview** in Australia Southeast, Canada Central, Canada East, East Asia, North Central US, South Central US, Switzerland North, UAE North, UK South, UK West, West US, and Central US.
+
+## Overview
+
+Single to Flexible server migration feature provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
+
+You choose the source server and can select up to **8** databases from it. This limitation is per migration task. The migration feature automates the following steps:
+
+1. Creates the migration infrastructure in the region of the target flexible server
+2. Creates public IP address and attaches it to the migration infrastructure
+3. Allow-lists the migration infrastructure's IP address on the firewall rules of both the source and target servers
+4. Creates a migration project with both source and target types as Azure Database for PostgreSQL
+5. Creates a migration activity to migrate the databases specified by the user from source to target.
+6. Migrates schema from source to target
+7. Creates databases with the same name on the target Flexible server
+8. Migrates data from source to target
+
+The following steps outline the flow of the Single to Flexible migration feature.
+
+**Steps:**
+1. Create a Flex PG server
+2. Invoke migration
+3. Migration infrastructure provisioned (DMS)
+4. Initiates the migration: (4a) initial dump/restore (online and offline); (4b) streaming of changes (online only)
+5. Cutover to the target
+
+The migration feature is exposed through the **Azure portal** and via easy-to-use **Azure CLI** commands. It allows you to create migrations, list migrations, display migration details, modify the state of a migration, and delete migrations.
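+
+As a sketch of the CLI surface (the `az postgres flexible-server migration` command group and its parameters are assumptions from the preview; see [Migrate using Azure CLI](./how-to-migrate-single-to-flex-cli.md) for the authoritative syntax):
+
+```azurecli
+# Create a migration on the target Flexible Server; migration.json describes the source server and databases
+az postgres flexible-server migration create --resource-group myresourcegroup --name myflexserver --migration-name mymigration --properties "migration.json"
+
+# List migrations on the target server, then inspect one
+az postgres flexible-server migration list --resource-group myresourcegroup --name myflexserver
+az postgres flexible-server migration show --resource-group myresourcegroup --name myflexserver --migration-name mymigration
+```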
+
+## Migration modes comparison
+
+Single to Flexible Server migration supports online and offline modes of migration. The online option provides reduced-downtime migration, subject to logical replication restrictions, while the offline option offers a simpler migration but may incur extended downtime depending on the size of the databases.
+
+The following table summarizes the differences between these two modes of migration.
+
+| Capability | Online | Offline |
+|:|:-|:--|
+| Database availability for reads during migration | Available | Available |
+| Database availability for writes during migration | Available | Generally not recommended. Any writes initiated after the migration starts aren't captured or migrated |
+| Application Suitability | Applications that need maximum uptime | Applications that can afford a planned downtime window |
+| Environment Suitability | Production environments | Usually Development, Testing environments and some production that can afford downtime |
+| Suitability for Write-heavy workloads | Suitable but expected to reduce the workload during migration | Not Applicable. Writes at source after migration begins are not replicated to target. |
+| Manual Cutover | Required | Not required |
+| Downtime Required | Less | More |
+| Logical replication limitations | Applicable | Not Applicable |
+| Migration time required | Depends on Database size and the write activity until cutover | Depends on Database size |
+
+**Migration steps involved for Offline mode** = Dump of the source Single Server database, followed by a Restore at the target Flexible Server.
+
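+Conceptually, the offline flow is the same dump and restore you could run yourself with the standard PostgreSQL tools; a minimal sketch, assuming placeholder server, admin, and database names:
+
+```bash
+# Dump the database from the source Single Server (custom format)
+pg_dump -Fc -h mysingleserver.postgres.database.azure.com \
+  -U myadmin@mysingleserver -d mydb -f mydb.dump
+
+# Restore the dump into the target Flexible Server
+# (assumes the database already exists on the target)
+pg_restore -v --no-owner -h myflexserver.postgres.database.azure.com \
+  -U myadmin -d mydb mydb.dump
+```
+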
+The following table shows the approximate time taken to perform offline migrations for databases of various sizes.
+
+>[!NOTE]
+> Add ~15 minutes for the migration infrastructure to get deployed for each migration task, where each task can migrate up to 8 databases.
+
+| Database Size | Approximate Time Taken (HH:MM) |
+|:|:-|
+| 1 GB | 00:01 |
+| 5 GB | 00:05 |
+| 10 GB | 00:10 |
+| 50 GB | 00:45 |
+| 100 GB | 06:00 |
+| 500 GB | 08:00 |
+| 1000 GB | 09:30 |
+
+**Migration steps involved for Online mode** = Dump of the source Single Server database(s), Restore of that dump in the target Flexible server, followed by Replication of ongoing changes (change data capture using logical decoding).
+
+The time taken for an online migration to complete depends on the incoming writes to the source server. The higher the write workload on the source, the more time it takes for the data to be replicated to the target flexible server.
+
+Based on the above differences, pick the mode that best works for your workloads.
+++
+## Migration steps
+
+### Pre-requisites
+
+Follow the steps provided in this section before you get started with the single to flexible server migration feature.
+
+- **Target Server Creation** - You need to create the target PostgreSQL flexible server before using the migration feature. Use the creation [QuickStart guide](./flexible-server/quickstart-create-server-portal.md) to create one.
+
+- **Source Server pre-requisites** - You must [enable logical replication](./concepts-logical.md) on the source server, as shown in the portal screenshot and the CLI sketch below.
+
+ :::image type="content" source="./media/concepts-single-to-flex/logical-replication-support.png" alt-text="Logical replication from Azure portal" lightbox="./media/concepts-single-to-flex/logical-replication-support.png":::
+
+>[!NOTE]
+> Enabling logical replication will require a server reboot for the change to take effect.
+
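+As an alternative to the portal, you can enable logical replication on the source Single Server with the Azure CLI; a minimal sketch, assuming placeholder resource group and server names:
+
+```azurecli-interactive
+# Set the replication support parameter to logical on the source Single Server
+az postgres server configuration set --resource-group myresourcegroup --server-name mysingleserver --name azure.replication_support --value logical
+
+# Restart the server for the change to take effect
+az postgres server restart --resource-group myresourcegroup --name mysingleserver
+```
+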
+- **Azure Active Directory App set up** - The Azure AD app is a critical component of the migration feature. It helps with role-based access control, as the migration feature needs access to both the source and target servers. See [How to setup and configure Azure AD App](./how-to-setup-aad-app-portal.md) for a step-by-step process.
+
+### Data and schema migration
+
+Once all these pre-requisites are taken care of, you can perform the migration. This automated step involves schema and data migration using the Azure portal or Azure CLI.
+
+- [Migrate using Azure portal](./how-to-migrate-single-to-flex-portal.md)
+- [Migrate using Azure CLI](./how-to-migrate-single-to-flex-cli.md)
+
+### Post migration
+
+- All the resources created by this migration tool will be automatically cleaned up irrespective of whether the migration has **succeeded/failed/cancelled**. There is no action required from you.
+
+- If your migration has failed and you want to retry the migration, then you need to create a new migration task with a different name and retry the operation.
+
+- If you have more than eight databases on your single server and if you want to migrate them all, then it is recommended to create multiple migration tasks with each task migrating up to eight databases.
+
+- The migration doesn't move the database users and roles of the source server. These must be manually created and applied to the target server post migration.
+
+- For security reasons, it is highly recommended to delete the Azure Active Directory app once the migration completes.
+
+- After validating your data and making your application point to the flexible server, you can consider deleting your single server.
+
+## Limitations
+
+### Size limitations
+
+- Databases of sizes up to 1 TB can be migrated using this feature. To migrate larger databases or heavy write workloads, reach out to your account team or contact us at AskAzureDBforPGS2F@microsoft.com.
+
+- In one migration attempt, you can migrate up to eight user databases from a single server to flexible server. In case you have more databases to migrate, you can create multiple migrations between the same single and flexible servers.
+
+### Performance limitations
+
+- The migration infrastructure is deployed on a 4-vCore VM, which may limit the migration performance.
+
+- The deployment of migration infrastructure takes ~10-15 minutes before the actual data migration starts - irrespective of the size of data or the migration mode (online or offline).
+
+### Replication limitations
+
+- Single to Flexible Server migration feature uses the logical decoding feature of PostgreSQL to perform the online migration, and it comes with the following limitations. See the PostgreSQL documentation for [logical replication limitations](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
+ - **DDL commands** are not replicated.
+ - **Sequence** data is not replicated.
+  - **Truncate** commands are not replicated. (**Workaround**: use DELETE instead of TRUNCATE. To avoid accidental TRUNCATE invocations, you can revoke the TRUNCATE privilege from tables.)
+
+ - Views, Materialized views, partition root tables and foreign tables will not be migrated.
+
+- Logical decoding will use resources in the source single server. Consider reducing the workload or plan to scale CPU/memory resources at the Source Single Server during the migration.
+
+### Other limitations
+
+- The migration feature migrates only data and schema of the single server databases to flexible server. It does not migrate other features such as server parameters, connection security details, firewall rules, users, roles and permissions. In other words, everything except data and schema must be manually configured in the target flexible server.
+
+- The migration feature doesn't validate the data in the flexible server post migration. You must do this validation manually.
+
+- The migration tool migrates only user databases, including the `postgres` database; it doesn't migrate system/maintenance databases.
+
+- For failed migrations, there's no option to retry the same migration task. A new migration task with a unique name needs to be created.
+
+- The migration feature does not include assessment of your single server.
+
+## Best practices
+
+- As part of discovery and assessment, take the server SKU, CPU usage, storage, database sizes, and extensions usage as some of the critical data to help with migrations.
+- Plan the mode of migration for each database. For less complex migrations and smaller databases, consider offline mode of migrations.
+- Batch similar sized databases in a migration task.
+- Perform large database migrations with one or two databases at a time to avoid source-side load and migration failures.
+- Perform test migrations before migrating for production.
+  - **Testing migrations** is a very important aspect of database migration to ensure that all aspects of the migration are taken care of, including application testing. The best practice is to begin by running a migration entirely for testing purposes. Start a migration, and after it enters the continuous replication (CDC) phase with minimal lag, make your flexible server the primary database server and use it for testing the application to ensure expected performance and results. If you're migrating to a higher PostgreSQL version, test your application for compatibility.
+
+  - **Production migrations** - Once testing is completed, you can migrate the production databases. At this point, you need to finalize the day and time of the production migration. Ideally, application use is low at that time. In addition, all stakeholders who need to be involved should be available and ready. The production migration requires close monitoring. For an online migration, it's important that the replication is complete before performing the cutover, to prevent data loss.
+
+- Cut over all dependent applications to access the new primary database and open the applications for production usage.
+- Once the application starts running on flexible server, monitor the database performance closely to see if performance tuning is required.
+
+## Next steps
+
+- [Migrate to Flexible server using Azure portal](./how-to-migrate-single-to-flex-portal.md).
+- [Migrate to Flexible server using Azure CLI](./how-to-migrate-single-to-flex-cli.md)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: |
| UK South | :heavy_check_mark: | :heavy_check_mark: | :x: |
| UK West | :heavy_check_mark: | :x: | :x: |
+| West Central US | :heavy_check_mark: | :x: | :x: |
| West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| West US | :heavy_check_mark: | :x: | :x: |
| West US 2 | :x: $$ | :x: $ | :x: |
$ New Zone-redundant high availability deployments are temporarily blocked in these regions.
$$ New server deployments are temporarily blocked in these regions. Already provisioned servers are fully supported.
-** Zone-redundant high availability can now be deployed when you provision new servers in these regions. Pre-existing servers deployed in AZ with *no preference* (which you can check on the Azure Portal), the standby will be provisioned in the same AZ. To configure zone-redundant high availability, perform a point-in-time restore of the server and enable HA on the restored server.
+** Zone-redundant high availability can now be deployed when you provision new servers in these regions. For pre-existing servers deployed in an AZ with *no preference* (which you can check on the Azure portal), the standby will be provisioned in the same AZ. To configure zone-redundant high availability, perform a point-in-time restore of the server and enable HA on the restored server.
<!-- We continue to add more regions for flexible server. --> > [!NOTE]
postgresql How To Migrate Single To Flex Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-migrate-single-to-flex-cli.md
+
+ Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure CLI"
+
+description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using CLI.
++++ Last updated : 05/09/2022++
+# Migrate Single Server to Flexible Server PostgreSQL using Azure CLI
+
+>[!NOTE]
+> Single Server to Flexible Server migration feature is in public preview.
+
+This quickstart shows you how to use the Single to Flexible Server migration feature to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
+
+## Before you begin
+
+1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
+2. Register your subscription for Azure Database Migration Service (DMS). If you've already done it, you can skip this step. Go to the Azure portal home page and navigate to your subscription as shown below.
+
+ :::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-dms.png" alt-text="Screenshot of C L I DMS" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-dms.png":::
+
+3. In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration** as shown below and click on **Register**.
+
+ :::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-dms-register.png" alt-text="Screenshot of C L I DMS register" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-dms-register.png":::
+
+## Pre-requisites
+
+### Set up Azure CLI
+
+1. Install the latest Azure CLI for your operating system from the [Azure CLI install page](/cli/azure/install-azure-cli).
+2. If Azure CLI is already installed, check the version by issuing the **az version** command. The version should be **2.28.0 or above** to use the migration CLI commands. If not, update your Azure CLI using this [link](/cli/azure/update-azure-cli.md).
+3. Once you have the right Azure CLI version, run the **az login** command. A browser page opens with the Azure sign-in page to authenticate. Provide your Azure credentials to complete the authentication. For other ways to sign in with Azure CLI, visit this [link](/cli/azure/authenticate-azure-cli.md).
+
+ ```bash
+ az login
+ ```
+1. Take care of the pre-requisites listed in this [**document**](./concepts-single-to-flexible.md#pre-requisites), which are necessary to get started with the Single to Flexible migration feature.
+
+## Migration CLI commands
+
+Single to Flexible Server migration feature comes with a list of easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with **az postgres flexible-server migration**. You can use the **--help** parameter to understand the options associated with a command and to frame the right syntax.
+
+```azurecli-interactive
+az postgres flexible-server migration --help
+```
+
+ gives you the following output.
+
+ :::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-help.png" alt-text="Screenshot of C L I help" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-help.png":::
+
+It lists the set of migration commands that are supported along with their actions. Let us look into these commands in detail.
+
+### Create migration
+
+The create migration command helps in creating a migration from a source server to a target server.
+
+```azurecli-interactive
+az postgres flexible-server migration create --help
+```
+
+gives the following result
++
+It calls out the expected arguments and has an example syntax to create a successful migration from the source to the target server. The CLI command to create a migration is given below:
+
+```azurecli
+az postgres flexible-server migration create [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--properties]
+```
+
+| Parameter | Description |
+| - | - |
+|**subscription** | Subscription ID of the target flexible server |
+| **resource-group** | Resource group of the target flexible server |
+| **name** | Name of the target flexible server |
+| **migration-name** | Unique identifier for the migration attempted to the flexible server. This field accepts only alphanumeric characters and doesn't accept any special characters except **-**. The name can't start with **-**, and no two migrations to a flexible server can have the same name. |
+| **properties** | Absolute path to a JSON file that has the information about the source single server |
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration create --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON"
+```
+
+The **migration-name** argument used in **create migration** command will be used in other CLI commands such as **update, delete, show** to uniquely identify the migration attempt and to perform the corresponding actions.
+
+The migration feature offers online and offline modes of migration. To know more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md).
+
+Create a migration between a source and target server with a migration mode of your choice. The **create** command needs a JSON file to be passed as part of its **properties** argument.
+
+The structure of the JSON is given below.
+
+```json
+{
+  "properties": {
+    "SourceDBServerResourceId": "subscriptions/<subscriptionid>/resourceGroups/<src_rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
+    "SourceDBServerFullyQualifiedDomainName": "fqdn of the source server as per the custom DNS server",
+    "TargetDBServerFullyQualifiedDomainName": "fqdn of the target server as per the custom DNS server",
+    "SecretParameters": {
+      "AdminCredentials": {
+        "SourceServerPassword": "<password>",
+        "TargetServerPassword": "<password>"
+      },
+      "AADApp": {
+        "ClientId": "<client id>",
+        "TenantId": "<tenant id>",
+        "AadSecret": "<secret>"
+      }
+    },
+    "MigrationResourceGroup": {
+      "ResourceId": "subscriptions/<subscriptionid>/resourceGroups/<temp_rg_name>",
+      "SubnetResourceId": "/subscriptions/<subscriptionid>/resourceGroups/<rg_name>/providers/Microsoft.Network/virtualNetworks/<Vnet_name>/subnets/<subnet_name>"
+    },
+    "DBsToMigrate": [
+      "<db1>", "<db2>"
+    ],
+    "SetupLogicalReplicationOnSourceDBIfNeeded": "true",
+    "OverwriteDBsInTarget": "true"
+  }
+}
+```
+
+Create migration parameters:
+
+| Parameter | Type | Description |
+| - | - | - |
+| **SourceDBServerResourceId** | Required | Resource ID of the source single server. |
+| **SourceDBServerFullyQualifiedDomainName** | Optional | Used when a custom DNS server is used for name resolution for a virtual network. The FQDN of the single server as per the custom DNS server should be provided for this property. |
+| **TargetDBServerFullyQualifiedDomainName** | Optional | Used when a custom DNS server is used for name resolution inside a virtual network. The FQDN of the flexible server as per the custom DNS server should be provided for this property. <br> **_SourceDBServerFullyQualifiedDomainName_** and **_TargetDBServerFullyQualifiedDomainName_** should be included in the JSON only in the rare scenario of a custom DNS server being used for name resolution instead of Azure-provided DNS. Otherwise, don't include these parameters in the JSON file. |
+| **SecretParameters** | Required | Passwords for the admin user of both the single server and the flexible server, along with the Azure AD app credentials. They help to authenticate against the source and target servers and to check proper authorization access to the resources. |
+| **MigrationResourceGroup** | Optional | This section consists of two properties. <br> **ResourceId (optional)**: The migration infrastructure and other network infrastructure components are created to migrate data and schema from the source to the target. By default, all the components created by this feature are provisioned under the resource group of the target server. If you wish to deploy them under a different resource group, assign the resource ID of that resource group to this property. <br> **SubnetResourceId (optional)**: If your source has public access turned OFF, or if your target server is deployed inside a VNet, specify a subnet under which the migration infrastructure needs to be created so that it can connect to both source and target servers. |
+| **DBsToMigrate** | Required | The list of databases you want to migrate to the flexible server. You can include a maximum of eight database names at a time. |
+| **SetupLogicalReplicationOnSourceDBIfNeeded** | Optional | Logical replication can be enabled on the source server automatically by setting this property to **true**. This change in the server settings requires a server restart with a downtime of a few minutes (~2-3 minutes). |
+| **OverwriteDBsInTarget** | Optional | If the target server happens to have an existing database with the same name as the one you're trying to migrate, the migration pauses until you acknowledge that overwrites in the target DBs are allowed. You can avoid this pause by setting this property to **true**, which gives the migration feature permission to automatically overwrite databases. |
+
+### Mode of migrations
+
+The default migration mode for migrations created using CLI commands is **online**. With the above properties filled out in your JSON file, an online migration is created from your single server to the flexible server.
+
+If you want to migrate in **offline** mode, you need to add an additional property **"TriggerCutover": "true"** to your properties JSON file before initiating the create command, as sketched below.
+
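+For example, the abbreviated properties file below adds the switch (all other properties stay as shown earlier):
+
+```json
+{
+  "properties": {
+    "TriggerCutover": "true"
+  }
+}
+```
+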
+### List migrations
+
+The **list** command shows the migration attempts that were made to a flexible server. The CLI command to list migrations is given below:
+
+```azurecli
+az postgres flexible-server migration list [--subscription]
+ [--resource-group]
+ [--name]
+ [--filter]
+```
+
+There is a parameter called **filter** that can take **Active** and **All** as values.
+
+- **Active** - Lists the currently active migration attempts for the target server. It doesn't include migrations that have reached a failed, canceled, or succeeded state.
+- **All** - Lists all the migration attempts to the target server. This includes both active and past migrations, irrespective of state.
+
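+**For example**, reusing the values from the **create** example above, the following lists all migration attempts:
+
+```azurecli-interactive
+az postgres flexible-server migration list --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --filter All
+```
+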
+```azurecli-interactive
+az postgres flexible-server migration list --help
+```
+
+Run this command for any additional information.
+
+### Show Details
+
+The **show** command gets the details of a specific migration. These details include information on the current state and substate of the migration. The CLI command to show the details of a migration is given below:
+
+```azurecli
+az postgres flexible-server migration show [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+```
+
+The **migration-name** is the name assigned to the migration during the **create migration** command. Here's a snapshot of the sample response from the **show** CLI command.
++
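+**For example**, reusing the values from the **create** example above:
+
+```azurecli-interactive
+az postgres flexible-server migration show --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
+```
+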
+Some important points to note on the command response:
+
+- As soon as the **create** migration command is triggered, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 15 minutes for the migration workflow to deploy the migration infrastructure, configure firewall rules with source and target servers, and to perform a few maintenance tasks.
+- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place.
+- Each DB being migrated has its own section with all migration details such as table count, incremental inserts, deletes, pending bytes, etc.
+- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated.
+- For **Offline** mode, the migration moves to the **Succeeded** state as soon as the **Migrating Data** substate completes successfully. If there's an issue at the **Migrating Data** substate, the migration moves into a **Failed** state.
+- For **Online** mode, the migration moves to the state of **WaitingForUserAction** and a substate of **WaitingForCutoverTrigger** after the **Migrating Data** state completes successfully. The details of **WaitingForUserAction** state are covered in detail in the next section.
+
+```azurecli-interactive
+az postgres flexible-server migration show --help
+```
+
+Run this command for any additional information.
+
+### Update migration
+
+As soon as the infrastructure setup is complete, the migration activity pauses, with appropriate messages in the **show details** CLI command response, if some pre-requisites are missing or if the migration is at a state to perform a cutover. At this point, the migration goes into a state called **WaitingForUserAction**. The **update migration** command is used to set values for parameters, which helps the migration move to the next stage in the process. Let us look at each of the substates.
+
+- **WaitingForLogicalReplicationSetupRequestOnSourceDB** - If logical replication isn't set at the source server, or if it wasn't included as a part of the JSON file, the migration waits for logical replication to be enabled at the source. You can enable the logical replication setting manually by changing the replication flag to **Logical** on the portal, which requires a server restart. This can also be enabled by the following CLI command:
+
+```azurecli
+az postgres flexible-server migration update [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--initiate-data-migration]
+```
+
+You need to pass the value **true** to the **initiate-data-migration** property to set logical replication on your source server.
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --initiate-data-migration true
+```
+
+If you have enabled it manually, **you still need to issue the above update command** for the migration to move out of the **WaitingForUserAction** state. The server doesn't need a reboot again, since it was already restarted via the portal action.
+
+- **WaitingForTargetDBOverwriteConfirmation** - This is the state where the migration is waiting for confirmation on the target overwrite, as data is already present in the target server for the database being migrated. You can provide this confirmation with the following CLI command:
+
+```azurecli
+az postgres flexible-server migration update [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--overwrite-dbs]
+```
+
+You need to pass the value **true** to the **overwrite-dbs** property to give the migration permission to overwrite any existing data in the target server.
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --overwrite-dbs true
+```
+
+- **WaitingForCutoverTrigger** - The migration gets to this state when the dump and restore of the databases is complete, and the ongoing writes at your source single server are being replicated to the target flexible server. You should wait for the replication to complete so that the target is in sync with the source. You can monitor the replication lag by using the response from the **show migration** command. There's a metric called **Pending Bytes** associated with each database being migrated, which indicates the difference between the source and target databases in bytes; it should near zero over time. Once it reaches zero for all the databases, stop any further writes to your single server. Then validate the data and schema on your flexible server to make sure they match the source server exactly. After completing the above steps, you can trigger **cutover** by using the following CLI command:
+
+```azurecli
+az postgres flexible-server migration update [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--cutover]
+```
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --cutover
+```
+
+After issuing the above command, use the **show details** command to monitor if the cutover has completed successfully. Upon successful cutover, migration will move to **Succeeded** state. Update your application to point to the new target flexible server.
+
+```azurecli-interactive
+az postgres flexible-server migration update --help
+```
+
+Run this command for any additional information.
+
+### Delete/Cancel Migration
+
+Any ongoing migration attempts can be deleted or canceled using the **delete migration** command. This command stops all migration activities in that task, but doesn't drop or roll back any changes on your target server. Below is the CLI command to delete a migration:
+
+```azurecli
+az postgres flexible-server migration delete [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+```
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration delete --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
+```
+
+```azurecli-interactive
+az postgres flexible-server migration delete --help
+```
+
+Run this command for any additional information.
+
+## Monitoring Migration
+
+The **create migration** command starts a migration between the source and target servers. The migration goes through a set of states and substates before eventually moving into a completed state. The **show** command helps to monitor ongoing migrations, since it gives the current state and substate of the migration.
+
+Migration **states**:
+
+| Migration State | Description |
+| - | - |
+| **InProgress** | The migration infrastructure is being set up, or the actual data migration is in progress. |
+| **Canceled** | The migration has been cancelled or deleted. |
+| **Failed** | The migration has failed. |
+| **Succeeded** | The migration has succeeded and is complete. |
+| **WaitingForUserAction** | Migration is waiting on a user action. This state has a list of substates that were discussed in detail in the previous section. |
+
+Migration **substates**:
+
+| Migration substates | Description |
+| - | - |
+| **PerformingPreRequisiteSteps** | Infrastructure is being set up and is being prepped for data migration. |
+| **MigratingData** | Data is being migrated. |
+| **CompletingMigration** | Migration cutover in progress. |
+| **WaitingForLogicalReplicationSetupRequestOnSourceDB** | Waiting for logical replication enablement. You can enable this manually or via the update migration CLI command covered in the previous section. |
+| **WaitingForCutoverTrigger** | Migration is ready for cutover. You can start the cutover when ready. |
+| **WaitingForTargetDBOverwriteConfirmation** | Waiting for confirmation on target overwrite as data is present in the target server being migrated into. <br> You can enable this via the **update migration** CLI command. |
+| **Completed** | Cutover was successful, and migration is complete. |
++
+## How to find if custom DNS is used for name resolution?
+Navigate to the virtual network where you deployed your source or target server, and select **DNS servers**. It indicates whether a custom DNS server or the default Azure-provided DNS server is in use; a CLI sketch for checking this follows.
++
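+You can also check this with the Azure CLI; a sketch with placeholder resource group and virtual network names (an empty result means the default Azure-provided DNS is in use):
+
+```azurecli-interactive
+az network vnet show --resource-group myresourcegroup --name myvnet --query dhcpOptions.dnsServers
+```
+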
+## Post Migration Steps
+
+Make sure the post-migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end-to-end migration.
+
+## Next steps
+
+- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
postgresql How To Migrate Single To Flex Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-migrate-single-to-flex-portal.md
+
+ Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure portal"
+
+description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using Portal.
++++ Last updated : 05/09/2022++
+# Migrate Single Server to Flexible Server PostgreSQL using the Azure portal
+
+This guide shows you how to use the Single to Flexible Server migration feature to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
+
+## Before you begin
+
+1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
+2. Register your subscription for the Azure Database Migration Service
+
+Go to the Azure portal home page and navigate to your subscription as shown below.
++
+In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration** as shown below and click on **Register**.
++
+## Pre-requisites
+
+Take care of the pre-requisites listed [here](./concepts-single-to-flexible.md#pre-requisites) to get started with the migration feature.
+
+## Configure migration task
+
+Single to Flexible Server migration feature comes with a simple, wizard-based portal experience. The following steps walk you through using the feature from the portal.
+
+- **Sign into the Azure portal** - Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
+- Navigate to your Azure Database for PostgreSQL flexible server. If you haven't created an Azure Database for PostgreSQL flexible server, create one using this [link](./flexible-server/quickstart-create-server-portal.md).
+
+- In the **Overview** tab of your flexible server, use the left navigation menu and scroll down to the option of **Migration (preview)** and click on it.
++
+Click the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you are using the migration feature, you will see an empty grid with a prompt to begin your first migration.
++
+If you have already created migrations to your flexible server, the grid is populated with the list of migrations that were attempted to this flexible server from single servers.
+
+Click on the **Migrate from Single Server** button. You will be taken through a wizard-based user interface to create a migration to this flexible server from any single server.
+
+### Setup tab
+
+The first tab is the setup tab, which has basic information about the migration and the list of pre-requisites that need to be taken care of to get started with migrations. The list of pre-requisites is the same as the one in the pre-requisites section [here](./concepts-single-to-flexible.md). Select the provided link to learn more.
++
+- The **Migration name** is the unique identifier for each migration to this flexible server. This field accepts only alphanumeric characters and does not accept any special characters except **&#39;-&#39;**. The name cannot start with a **&#39;-&#39;** and should be unique for a target server. No two migrations to the same flexible server can have the same name.
+- The **Migration resource group** is where all the migration-related components will be created by this migration feature.
+
+By default, it's the resource group of the target flexible server, and all the components are cleaned up automatically once the migration completes. If you want to use a temporary resource group for migration-related purposes, create a resource group and select it from the dropdown.
+
+- For the **Azure Active Directory App**, click the **select** option and pick the app that was created as a part of the pre-requisite step. Once the Azure AD App is chosen, paste the client secret that was generated for the Azure AD app to the **Azure Active Directory Client Secret** field.
++
+Click on the **Next** button.
+
+### Source tab
++
+The source tab prompts you to give details related to the source single server from which databases need to be migrated. As soon as you pick the **Subscription** and **Resource Group**, the dropdown for server names lists the single servers under that resource group across regions. It's recommended to migrate databases from a single server to a flexible server in the same region.
+
+Choose the single server from which you want to migrate databases in the dropdown.
+
+Once the single server is chosen, the fields such as **Location, PostgreSQL version, Server admin login name** are automatically pre-populated. The server admin login name is the admin username that was used to create the single server. Enter the password for the **server admin login name**. This is required for the migration feature to log in to the single server to initiate the dump and migration.
+
+You should also see the list of user databases inside the single server that you can pick for migration. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations using the same experience between the source and target servers.
+
+The final property in the source tab is the migration mode. The migration feature offers online and offline modes of migration. To know more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md).
+
+Once you pick the migration mode, the restrictions associated with the mode are displayed.
+
+After filling out all the fields, click the **Next** button.
+
+### Target tab
++
+This tab displays metadata of the flexible server like the **Subscription**, **Resource Group**, **Server name**, **Location**, and **PostgreSQL version**. It displays the **server admin login name**, which is the admin username that was used during the creation of the flexible server. Enter the corresponding password for the admin user. This is required for the migration feature to log in to the flexible server to perform restore operations.
+
+Choose an option **yes/no** for **Authorize DB overwrite**.
+
+- If you set the option to **Yes**, you give this migration service permission to overwrite existing data when a database that's being migrated is already present on the flexible server.
+- If set to **No**, it goes into a waiting state and asks you for permission either to overwrite the data or to cancel the migration.
+
+Click on the **Next** button.
+
+### Networking tab
+
+The content on the Networking tab depends on the networking topology of your source and target servers.
+
+- If both source and target servers are in public access, you'll see the message below.
++
+In this case, you need not do anything and can just click on the **Next** button.
+
+- If either the source or target server is configured with private access, then the content of the networking tab is different. Let us understand what private access means for single server and flexible server:
+
+- **Single Server Private Access** - **Deny public network access** set to **Yes** and a private endpoint configured
+- **Flexible Server Private Access** - When the flexible server is deployed inside a VNet.
+
+If either source or target is configured in private access, then the networking tab looks like the following
++
+All the fields will be automatically populated with subnet details. This is the subnet in which the migration feature will deploy Azure DMS to move data between the source and target.
+
+You can go ahead with the suggested subnet or choose a different subnet. But make sure that the selected subnet can connect to both the source and target servers.
+
+After picking a subnet, click the **Next** button.
+
+### Review + create tab
+
+This tab gives a summary of all the details for creating the migration. Review the details and click on the **Create** button to start the migration.
++
+## Monitoring migrations
+
+After clicking on the **Create** button, you should see a notification in a few seconds saying the migration was successfully created.
++
+You should automatically be redirected to the **Migrations (Preview)** page of the flexible server, which has a new entry for the recently created migration.
++
+The grid displaying the migrations has various columns, including **Name**, **Status**, **Source server name**, **Region**, **Version**, **Database names**, and the **Migration start time**. By default, the grid shows the list of migrations in decreasing order of migration start time; that is, recent migrations appear at the top of the grid.
+
+You can use the refresh button to refresh the status of the migrations.
+
+You can click on the migration name in the grid to see the details of that migration.
++
+- As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 10 minutes for the migration workflow to move out of this substate, since it takes time to create and deploy DMS, add its IP to the firewall list of the source and target servers, and perform a few maintenance tasks.
+- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place.
+- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated.
+- You can click on each of the DBs that are being migrated and a fan-out blade appears that has all migration details such as table count, incremental inserts, deletes, pending bytes, etc.
+- For **Offline** mode, the migration moves to **Succeeded** state as soon as the **Migrating Data** state completes successfully. If there is an issue at the **Migrating Data** state, the migration moves into a **Failed** state.
+- For **Online** mode, the migration moves to the state of **WaitingForUserAction** and **WaitingForCutOver** substate after the **Migrating Data** substate completes successfully.
++
+You can click on the migration name to go into the migration details page, where you should see the substate of **WaitingForCutover**.
++
+At this stage, the ongoing writes at your source single server are replicated to the target flexible server using the logical decoding feature of PostgreSQL. You should wait until the replication reaches a state where the target is almost in sync with the source. You can monitor the replication lag by clicking on each of the databases being migrated. This opens a fan-out blade with a set of metrics. Look for the value of the **Pending Bytes** metric; it should near zero over time. Once it reaches a few MB for all the databases, stop any further writes to your single server and wait until the metric reaches 0. Then validate the data and schema on your flexible server to make sure they match the source server exactly; a sketch of a simple row-count check follows.
+
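+As a rough sketch of such a validation, you might compare row counts for key tables on both servers with `psql` (server, admin, database, and table names below are placeholders; you'll be prompted for each password):
+
+```bash
+# Row count on the source Single Server
+psql "host=mysingleserver.postgres.database.azure.com user=myadmin@mysingleserver dbname=mydb sslmode=require" -t -c "SELECT count(*) FROM mytable;"
+
+# Row count on the target Flexible Server
+psql "host=myflexserver.postgres.database.azure.com user=myadmin dbname=mydb sslmode=require" -t -c "SELECT count(*) FROM mytable;"
+```
+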
+After completing the above steps, click on the **Cutover** button. You should see the following message:
++
+Click on the **Yes** button to start cutover.
+
+In a few seconds after starting cutover, you should see the following notification
++
+Once the cutover is complete, the migration moves to the **Succeeded** state, and migration of schema and data from your single server to the flexible server is now complete. You can use the refresh button on the page to check if the cutover was successful.
+
+After completing the above steps, you can make changes to your application code to point database connection strings to the flexible server and start using it as the primary database server.
+
+Possible migration states include
+
+- **InProgress**: The migration infrastructure is being set up, or the actual data migration is in progress.
+- **Canceled**: The migration has been cancelled or deleted.
+- **Failed**: The migration has failed.
+- **Succeeded**: The migration has succeeded and is complete.
+- **WaitingForUserAction**: Migration is waiting on a user action.
+
+Possible migration substates include
+
+- **PerformingPreRequisiteSteps**: Infrastructure is being set up and is being prepped for data migration
+- **MigratingData**: Data is being migrated
+- **CompletingMigration**: Migration cutover in progress
+- **WaitingForLogicalReplicationSetupRequestOnSourceDB**: Waiting for logical replication enablement.
+- **WaitingForCutoverTrigger**: Migration is ready for cutover.
+- **WaitingForTargetDBOverwriteConfirmation**: Waiting for confirmation on target overwrite as data is present in the target server being migrated into.
+- **Completed**: Cutover was successful, and migration is complete.
+
+## Cancel migrations
+
+You also have the option to cancel any ongoing migrations. For a migration to be canceled, it must be in **InProgress** or **WaitingForUserAction** state. You cannot cancel a migration that has either already **Succeeded** or **Failed**.
+
+You can select multiple ongoing migrations at once and cancel them.
++
+Note that **cancel migration** only stops further migration activity on your target server. It doesn't drop or roll back any changes on your target server that were made by the migration attempts. Make sure to drop the databases involved in a canceled migration on your target server.
+
+## Post migration steps
+
+Make sure the post-migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end-to-end migration.
+
+## Next steps
+- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
postgresql How To Setup Aad App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-setup-aad-app-portal.md
+
+ Title: "Setup Azure AD app to use with Single to Flexible migration"
+
+description: Learn about setting up Azure AD App to be used with Single to Flexible Server migration feature.
++++ Last updated : 05/09/2022++
+# Set up Azure AD app to use with Single to Flexible server migration
+
+This quickstart shows you how to set up an Azure Active Directory (Azure AD) app to use with Single to Flexible Server migration. It's an important component of the Single to Flexible migration feature. See [Azure Active Directory app](../active-directory/develop/howto-create-service-principal-portal.md) for details. The Azure AD app helps with role-based access control (RBAC), as the migration infrastructure requires access to both the source and target servers and is restricted by the roles assigned to the Azure AD app. Once created, the Azure AD app instance can be used to manage multiple migrations. To get started, create a new Azure AD enterprise app by doing the following steps:
+
+## Create Azure AD App
+
+1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
+2. Search for Azure Active Directory in the search bar on the top in the portal.
+3. Within the Azure Active Directory portal, under **Manage** on the left, choose **App Registrations**.
+4. Click on **New Registration**.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-new-registration.png" alt-text="New Registration for Azure Active Directory App." lightbox="./media/concepts-single-to-flex/azure-ad-new-registration.png":::
+
+5. Give the app registration a name, choose an option that suits your needs for account types, and click **Register**.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-application-registration.png" alt-text="Azure AD App Name screen." lightbox="./media/concepts-single-to-flex/azure-ad-application-registration.png":::
+
+6. Once the app is created, you can copy the client ID and tenant ID required for later steps in the migration. Next, click on **Add a certificate or secret**.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-secret-screen.png" alt-text="Add a certificate screen." lightbox="./media/concepts-single-to-flex/azure-ad-add-secret-screen.png":::
+
+7. In the next screen, click on **New client secret**.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-new-client-secret.png" alt-text="New Client Secret screen." lightbox="./media/concepts-single-to-flex/azure-ad-add-new-client-secret.png":::
+
+8. In the fan-out blade that opens, add a description, and use the drop-down to pick the life span of your Azure AD app. Once all the migrations are complete, the Azure AD app that was created for role-based access control can be deleted. The default option is six months. If you don't need the Azure AD app for six months, choose three months and click **Add**.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-client-secret-description.png" alt-text="Client Secret Description." lightbox="./media/concepts-single-to-flex/azure-ad-add-client-secret-description.png":::
+
+9. In the next screen, copy the **Value** column that has the details of the Azure AD app secret. The secret can be copied only at creation time. If you miss copying the secret, you'll need to delete it and create another one for future tries.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-client-secret-value.png" alt-text="Copying client secret." lightbox="./media/concepts-single-to-flex/azure-ad-client-secret-value.png":::
+
+10. Once the Azure AD app is created, you'll need to add contributor privileges for it to the following resources:
+
+ | Resource | Type | Description |
+ | - | - | - |
+ | Single Server | Required | Source single server you're migrating from. |
+ | Flexible Server | Required | Target flexible server you're migrating into. |
+ | Azure Resource Group | Required | Resource group for the migration. By default, this is the target flexible server resource group. If you're using a temporary resource group to create the migration infrastructure, the Azure Active Directory App will require contributor privileges to this resource group. |
+ | VNET | Required (if used) | If the source or the target happens to have private access, then the Azure Active Directory App will require contributor privileges to corresponding VNet. If you're using public access, you can skip this step. |
++
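+If you prefer scripting, `az ad sp create-for-rbac` can create the app registration, a client secret, and a Contributor role assignment in one step; a sketch with placeholder name and scope (you may still need to add role assignments for the other resources in the table above):
+
+```azurecli-interactive
+az ad sp create-for-rbac --name myMigrationApp --role Contributor \
+  --scopes /subscriptions/<subscription-id>/resourceGroups/<target-resource-group>
+```
+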
+## Add contributor privileges to an Azure resource
+
+Repeat the steps listed below for the source single server, target flexible server, resource group, and VNet (if used).
+
+1. For the target flexible server, select the target flexible server in the Azure portal. Click on Access Control (IAM) on the top left.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-iam-screen.png" alt-text="Access Control I A M screen." lightbox="./media/concepts-single-to-flex/azure-ad-iam-screen.png":::
+
+2. Click **Add** and choose **Add role assignment**.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-role-assignment.png" alt-text="Add role assignment here." lightbox="./media/concepts-single-to-flex/azure-ad-add-role-assignment.png":::
+
+> [!NOTE]
+> The Add role assignment capability is only enabled for users in the subscription with role type as **Owners**. Users with other roles do not have permission to add role assignments.
+
+3. Under the **Role** tab, select **Contributor** and click the **Next** button.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-contributor-privileges.png" alt-text="Choosing Contributor Screen." lightbox="./media/concepts-single-to-flex/azure-ad-contributor-privileges.png":::
+
+4. Under the **Members** tab, keep the default option of **Assign access to** User, group, or service principal, and click **Select Members**. Search for your Azure AD app and click **Select**.
+ :::image type="content" source="./media/concepts-single-to-flex/azure-ad-review-and-assign.png" alt-text="Review and Assign Screen." lightbox="./media/concepts-single-to-flex/azure-ad-review-and-assign.png":::
+
+
+## Next steps
+
+- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
+- [Migrate to Flexible server using Azure portal](./how-to-migrate-single-to-flex-portal.md)
+- [Migrate to Flexible server using Azure CLI](./how-to-migrate-single-to-flex-cli.md)
private-link Create Private Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-bicep.md
This Bicep file creates a private endpoint for an instance of Azure SQL Database
The Bicep file that this quickstart uses is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/private-endpoint-sql/). The Bicep file defines multiple Azure resources:
private-link Private Endpoint Static Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-static-ip-powershell.md
+
+ Title: Create a private endpoint with a static IP address - PowerShell
+
+description: Learn how to create a private endpoint for an Azure service with a static private IP address.
++++ Last updated : 05/13/2022+++
+# Create a private endpoint with a static IP address using PowerShell
+
+ A private endpoint IP address is allocated by DHCP in your virtual network by default. In this article, you'll create a private endpoint with a static IP address.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An Azure web app with a **PremiumV2-tier** or higher app service plan, deployed in your Azure subscription.
+
+ - For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+
+ - The example webapp in this article is named **myWebApp1979**. Replace the example with your webapp name.
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+## Create a resource group
+
+An Azure resource group is a logical container where Azure resources are deployed and managed.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name 'myResourceGroup' -Location 'eastus'
+```
+
+## Create a virtual network and bastion host
+
+A virtual network and subnet are required to host the private IP address for the private endpoint. You'll create a bastion host to connect securely to the virtual machine to test the private endpoint. You'll create the virtual machine in a later section.
+
+In this section, you'll:
+
+- Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
+
+- Create subnet configurations for the backend subnet and the bastion subnet with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig)
+
+- Create a public IP address for the bastion host with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)
+
+- Create the bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion)
+
+```azurepowershell-interactive
+## Configure the back-end subnet. ##
+$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name myBackendSubnet -AddressPrefix 10.0.0.0/24
+
+## Create the Azure Bastion subnet. ##
+$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.1.0/24
+
+## Create the virtual network. ##
+$net = @{
+    Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus'
+ AddressPrefix = '10.0.0.0/16'
+ Subnet = $subnetConfig, $bastsubnetConfig
+}
+$vnet = New-AzVirtualNetwork @net
+
+## Create the public IP address for the bastion host. ##
+$ip = @{
+ Name = 'myBastionIP'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ Zone = 1,2,3
+}
+$publicip = New-AzPublicIpAddress @ip
+
+## Create the bastion host. ##
+$bastion = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myBastion'
+ PublicIpAddress = $publicip
+ VirtualNetwork = $vnet
+}
+New-AzBastion @bastion -AsJob
+```
+
+## Create a private endpoint
+
+An Azure service that supports private endpoints is required to set up the private endpoint and connection to the virtual network. For the examples in this article, we're using the Azure web app from the prerequisites. For more information on the Azure services that support a private endpoint, see [Azure Private Link availability](availability.md).
+
+> [!IMPORTANT]
+> You must have a previously deployed Azure WebApp to proceed with the steps in this article. See [Prerequisites](#prerequisites) for more information.
+
+In this section, you'll:
+
+- Create a private link service connection with [New-AzPrivateLinkServiceConnection](/powershell/module/az.network/new-azprivatelinkserviceconnection).
+
+- Create the private endpoint static IP configuration with [New-AzPrivateEndpointIpConfiguration](/powershell/module/az.network/new-azprivateendpointipconfiguration).
+
+- Create the private endpoint with [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint).
+
+```azurepowershell-interactive
+## Place the previously created webapp into a variable. ##
+$webapp = Get-AzWebApp -ResourceGroupName myResourceGroup -Name myWebApp1979
+
+## Create the private endpoint connection. ##
+$pec = @{
+ Name = 'myConnection'
+ PrivateLinkServiceId = $webapp.ID
+ GroupID = 'sites'
+}
+$privateEndpointConnection = New-AzPrivateLinkServiceConnection @pec
+
+## Place the virtual network you created previously into a variable. ##
+$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
+
+## Disable the private endpoint network policy. ##
+$vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled"
+$vnet | Set-AzVirtualNetwork
+
+## Create the static IP configuration. ##
+$ip = @{
+ Name = 'myIPconfig'
+ GroupId = 'sites'
+ MemberName = 'sites'
+ PrivateIPAddress = '10.0.0.10'
+}
+$ipconfig = New-AzPrivateEndpointIpConfiguration @ip
+
+## Create the private endpoint. ##
+$pe = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myPrivateEndpoint'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ PrivateLinkServiceConnection = $privateEndpointConnection
+ IpConfiguration = $ipconfig
+}
+New-AzPrivateEndpoint @pe
+
+```
+
+## Configure the private DNS zone
+
+A private DNS zone is used to resolve the DNS name of the private endpoint in the virtual network. This example uses the DNS information for an Azure WebApp. For more information on the DNS configuration of private endpoints, see [Azure Private Endpoint DNS configuration](private-endpoint-dns.md).
+
+In this section, you'll:
+
+- Create a new private Azure DNS zone with [New-AzPrivateDnsZone](/powershell/module/az.privatedns/new-azprivatednszone)
+
+- Link the DNS zone to the virtual network you created previously with [New-AzPrivateDnsVirtualNetworkLink](/powershell/module/az.privatedns/new-azprivatednsvirtualnetworklink)
+
+- Create a DNS zone configuration with [New-AzPrivateDnsZoneConfig](/powershell/module/az.network/new-azprivatednszoneconfig)
+
+- Create a DNS zone group with [New-AzPrivateDnsZoneGroup](/powershell/module/az.network/new-azprivatednszonegroup)
+
+```azurepowershell-interactive
+## Place the virtual network into a variable. ##
+$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
+
+## Create the private DNS zone. ##
+$zn = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'privatelink.azurewebsites.net'
+}
+$zone = New-AzPrivateDnsZone @zn
+
+## Create a DNS network link. ##
+$lk = @{
+ ResourceGroupName = 'myResourceGroup'
+ ZoneName = 'privatelink.azurewebsites.net'
+ Name = 'myLink'
+ VirtualNetworkId = $vnet.Id
+}
+$link = New-AzPrivateDnsVirtualNetworkLink @lk
+
+## Configure the DNS zone. ##
+$cg = @{
+ Name = 'privatelink.azurewebsites.net'
+ PrivateDnsZoneId = $zone.ResourceId
+}
+$config = New-AzPrivateDnsZoneConfig @cg
+
+## Create the DNS zone group. ##
+$zg = @{
+ ResourceGroupName = 'myResourceGroup'
+ PrivateEndpointName = 'myPrivateEndpoint'
+ Name = 'myZoneGroup'
+ PrivateDnsZoneConfig = $config
+}
+New-AzPrivateDnsZoneGroup @zg
+
+```
+
+## Create a test virtual machine
+
+To verify the static IP address and the functionality of the private endpoint, a test virtual machine connected to your virtual network is required.
+
+In this section, you'll:
+
+- Create a login credential for the virtual machine with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)
+
+- Create a network interface for the virtual machine with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
+
+- Create a virtual machine configuration with [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig), [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem), [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage), and [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+
+- Create the virtual machine with [New-AzVM](/powershell/module/az.compute/new-azvm)
+
+```azurepowershell-interactive
+## Create the credential for the virtual machine. Enter a username and password at the prompt. ##
+$cred = Get-Credential
+
+## Place the virtual network into a variable. ##
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+
+## Create a network interface for the virtual machine. ##
+$nic = @{
+ Name = 'myNicVM'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+}
+$nicVM = New-AzNetworkInterface @nic
+
+## Create the configuration for the virtual machine. ##
+$vm1 = @{
+ VMName = 'myVM'
+ VMSize = 'Standard_DS1_v2'
+}
+$vm2 = @{
+ ComputerName = 'myVM'
+ Credential = $cred
+}
+$vm3 = @{
+ PublisherName = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Skus = '2019-Datacenter'
+ Version = 'latest'
+}
+$vmConfig = New-AzVMConfig @vm1 | Set-AzVMOperatingSystem -Windows @vm2 | Set-AzVMSourceImage @vm3 | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+## Create the virtual machine. ##
+New-AzVM -ResourceGroupName 'myResourceGroup' -Location 'eastus' -VM $vmConfig
+
+```
+
+## Test connectivity with the private endpoint
+
+Use the VM you created in the previous step to connect to the webapp across the private endpoint.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
+
+3. Select **myVM**.
+
+4. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
+
+5. Enter the username and password that you used when you created the VM. Select **Connect**.
+
+6. After you've connected, open PowerShell on the server.
+
+7. Enter `nslookup mywebapp1979.azurewebsites.net`. Replace **mywebapp1979** with the name of the web app that you created earlier. You'll receive a message that's similar to the following:
+
+ ```powershell
+ Server: UnKnown
+ Address: 168.63.129.16
+
+ Non-authoritative answer:
+ Name: mywebapp1979.privatelink.azurewebsites.net
+ Address: 10.0.0.10
+ Aliases: mywebapp1979.azurewebsites.net
+ ```
+
+ A static private IP address of *10.0.0.10* is returned for the web app name.
+
+8. In the bastion connection to **myVM**, open the web browser.
+
+9. Enter the URL of your web app, **https://mywebapp1979.azurewebsites.net**.
+
+ If your web app hasn't been deployed, you'll get the following default web app page:
+
+ :::image type="content" source="./media/private-endpoint-static-ip-powershell/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
+
+10. Close the connection to **myVM**.
+
+## Next steps
+
+To learn more about Azure Private Link and private endpoints, see:
+
+- [What is Azure Private Link](private-link-overview.md)
+
+- [Private endpoint overview](private-endpoint-overview.md)
+
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Last updated 12/06/2021
# What is Microsoft Purview?
-Microsoft Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Enable data curators to manage and secure your data estate. Empower data consumers to find valuable, trustworthy data.
+Microsoft Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Microsoft Purview allows you to:
+- Create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage.
+- Enable data curators to manage and secure your data estate.
+- Empower data consumers to find valuable, trustworthy data.
:::image type="content" source="./media/overview/high-level-overview.png" alt-text="High-level architecture of Microsoft Purview, showing multi-cloud and on premises sources flowing into Microsoft Purview, and Microsoft Purview's apps (Data Catalog, Map, and Data Estate Insights) allowing data consumers and data curators to view and manage metadata. This metadata is also being ported to external analytics services from Microsoft Purview for more processing." lightbox="./media/overview/high-level-overview-large.png":::
Microsoft Purview automates data discovery by providing data scanning and classi
## Data Map
-Microsoft Purview Data Map provides the foundation for data discovery and effective data governance. Microsoft Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operation systems on-premises and cloud. Microsoft Purview Data Map is automatically kept up to date with built-in automated scanning and classification system. Business users can configure and use the Microsoft Purview Data Map through an intuitive UI and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.0 APIs.
+Microsoft Purview Data Map provides the foundation for data discovery and effective data governance. Microsoft Purview Data Map is a cloud-native PaaS service that captures metadata about enterprise data present in analytics and operational systems, both on-premises and in the cloud. The Data Map is kept up to date automatically with a built-in scanning and classification system. Business users can configure and use the Microsoft Purview Data Map through an intuitive UI, and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.2 APIs, as sketched below.
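+
+As a minimal sketch of that programmatic access, the REST call below lists Atlas type definitions. The account name is a placeholder, and the endpoint path and token audience follow the Purview data plane conventions as best I know them; verify both against the Purview REST reference:
+
+```azurepowershell-interactive
+## Illustrative sketch: assumes the Az PowerShell module and an existing Purview account. ##
+$accountName = 'contoso-purview'   # placeholder account name
+$token = (Get-AzAccessToken -ResourceUrl 'https://purview.azure.net').Token
+$headers = @{ Authorization = "Bearer $token" }
+## List Atlas v2 type definitions from the Data Map. ##
+Invoke-RestMethod -Uri "https://$accountName.purview.azure.com/catalog/api/atlas/v2/types/typedefs" -Headers $headers
+```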
Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microsoft Purview Data Estate Insights as unified experiences within the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). For more information, see our [introduction to Data Map](concept-elastic-data-map.md). ## Data Catalog
-With the Microsoft Purview Data Catalog, business and technical users alike can quickly & easily find relevant data using a search experience with filters based on various lenses like glossary terms, classifications, sensitivity labels and more. For subject matter experts, data stewards and officers, the Microsoft Purview Data Catalog provides data curation features like business glossary management and ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets starting from the operational systems on-premises, through movement, transformation & enrichment with various data storage & processing systems in the cloud to consumption in an analytics system like Power BI.
+With the Microsoft Purview Data Catalog, business and technical users can quickly and easily find relevant data using a search experience with filters based on lenses such as glossary terms, classifications, and sensitivity labels. For subject matter experts, data stewards, and officers, the Microsoft Purview Data Catalog provides data curation features such as business glossary management and the ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets: for example, starting from operational systems on-premises, through movement, transformation, and enrichment with various data storage and processing systems in the cloud, to consumption in an analytics system like Power BI.
For more information, see our [introduction to search using Data Catalog](how-to-search-catalog.md). ## Data Estate Insights
Traditionally, discovering enterprise data sources has been an organic process b
* Because there's no central location to register data sources, users might be unaware of a data source unless they come into contact with it as part of another process. * Unless users know the location of a data source, they can't connect to the data by using a client application. Data-consumption experiences require users to know the connection string or path. * The intended use of the data is hidden to users unless they know the location of a data source's documentation. Data sources and documentation might live in several places and be consumed through different kinds of experiences.
-* If users have questions about an information asset, they must locate the expert or team that's responsible for the data and engage them offline. There's no explicit connection between data and the experts that have perspectives on its use.
+* If users have questions about an information asset, they must locate the expert or team responsible for that data and engage them offline. There's no explicit connection between the data and the experts who understand the data's context.
* Unless users understand the process for requesting access to the data source, discovering the data source and its documentation won't help them access the data. ## Discovery challenges for data producers
When such challenges are combined, they present a significant barrier for compan
Users who are responsible for ensuring the security of their organization's data may have any of the challenges listed above as data consumers and producers, and the following extra challenges:
-* An organization's data is constantly growing, stored, and shared in new directions. The task of discovering, protecting, and governing your sensitive data is one that never ends. You want to make sure that your organization's content is being shared with the correct people, applications, and with the correct permissions.
-* Understanding the risk levels in your organization's data requires diving deep into your content, looking for keywords, RegEx patterns, and sensitive data types. Sensitive data types can include Credit Card numbers, Social Security numbers, or Bank Account numbers, to name a few. You constantly monitor all data sources for sensitive content, as even the smallest amount of data loss can be critical to your organization.
-* Ensuring that your organization continues to comply with corporate security policies is a challenging task as your content grows and changes, and as those requirements and policies are updated for changing digital realities. Security administrators are often tasked with ensuring data security in the quickest time possible.
+* An organization's data is constantly growing and being stored and shared in new directions. The task of discovering, protecting, and governing your sensitive data is one that never ends. You need to ensure that your organization's content is being shared with the correct people, applications, and with the correct permissions.
+* Understanding the risk levels in your organization's data requires diving deep into your content, looking for keywords, RegEx patterns, and sensitive data types. For example, sensitive data types might include credit card numbers, Social Security numbers, or bank account numbers. You must constantly monitor all data sources for sensitive content, as even the smallest amount of data loss can be critical to your organization.
+* Ensuring that your organization continues to comply with corporate security policies is a challenging task as your content grows and changes, and as those requirements and policies are updated for changing digital realities. Security administrators need to ensure data security in the quickest time possible.
## Microsoft Purview advantages
Microsoft Purview is designed to address the issues mentioned in the previous se
Microsoft Purview provides a cloud-based service into which you can register data sources. During registration, the data remains in its existing location, but a copy of its metadata is added to Microsoft Purview, along with a reference to the data source location. The metadata is also indexed to make each data source easily discoverable via search and understandable to the users who discover it.
-After you register a data source, you can then enrich its metadata. Either the user who registered the data source or another user in the enterprise adds the metadata. Any user can annotate a data source by providing descriptions, tags, or other metadata for requesting data source access. This descriptive metadata supplements the structural metadata, such as column names and data types, that's registered from the data source.
+After you register a data source, you can then enrich its metadata. Either the user who registered the data source or another user in the enterprise can add more metadata. Any user can annotate a data source by providing descriptions, tags, or other metadata for requesting data source access. This descriptive metadata supplements the structural metadata, such as column names and data types, that's registered from the data source.
-Discovering and understanding data sources and their use is the primary purpose of registering the sources. Enterprise users might need data for business intelligence, application development, data science, or any other task where the right data is required. They use the data catalog discovery experience to quickly find data that matches their needs, understand the data to evaluate its fitness for the purpose, and consume the data by opening the data source in their tool of choice.
+Discovering and understanding data sources and their use is the primary purpose of registering the sources. Enterprise users might need data for business intelligence, application development, data science, or any other task where the correct data is required. They can use the data catalog discovery experience to quickly find data that matches their needs, understand the data to evaluate its fitness for purpose, and consume the data by opening the data source in their tool of choice.
At the same time, users can contribute to the catalog by tagging, documenting, and annotating data sources that have already been registered. They can also register new data sources, which are then discovered, understood, and consumed by the community of catalog users.
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Scans can be managed or run again on completion
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-full-inc.png" alt-text="full or incremental scan.":::
-## Lineage(Preview)
+## Lineage (Preview)
+<a id="lineagepreview"></a>
Microsoft Purview supports lineage from Azure SQL Database. When setting up a scan, enable the lineage extraction toggle to extract lineage. ### Prerequisites for setting up a scan with lineage extraction
-1. Follow steps under [authentication for a scan using Managed Identity](#authentication-for-a-scan) section to authorize Microsoft Purview scan your Azure SQL DataBase
+1. Follow the steps under the [authentication for a scan using Managed Identity](#authentication-for-a-scan) section to authorize Microsoft Purview to scan your Azure SQL Database.
+2. Sign in to Azure SQL Database with an Azure AD account and assign the proper permission (for example, db_owner) to the Purview managed identity. Use SQL syntax like the sketch below to create the user and grant permission, replacing 'purview-account' with your account name.
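+
+A hedged reconstruction of that grant, run from PowerShell. The server and database names are placeholders, and the use of `Invoke-Sqlcmd` (SqlServer module) with an Azure AD access token is an assumption; the same T-SQL can be run in any query editor while signed in as an Azure AD admin:
+
+```azurepowershell-interactive
+## Illustrative only: replace the server, database, and 'purview-account' placeholders. ##
+$token = (Get-AzAccessToken -ResourceUrl 'https://database.windows.net/').Token
+$query = @"
+CREATE USER [purview-account] FROM EXTERNAL PROVIDER;
+EXEC sp_addrolemember 'db_owner', 'purview-account';
+"@
+Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'mydatabase' -AccessToken $token -Query $query
+```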
Microsoft Purview supports lineage from Azure SQL Database. At the time of setti
### Search Azure SQL Database assets and view runtime lineage
-You can [browse data catalog](how-to-browse-catalog.md) or [search data catalog](how-to-search-catalog.md) to view asset details for Azure SQL Database. Below steps describe how-to view runtime lineage details
+You can [browse the data catalog](how-to-browse-catalog.md) or [search the data catalog](how-to-search-catalog.md) to view asset details for Azure SQL Database. The following steps describe how to view runtime lineage details.
-1. Go to asset -> lineage tab, you can see the asset lineage when applicable. Refer to the [supported capabilities](#supported-capabilities) section on the supported Azure SQL Database lineage scenarios. For more information about lineage in general, see [data lineage](concept-data-lineage.md) and [lineage user guide](catalog-lineage-user-guide.md)
+1. Go to the asset -> lineage tab, where you can see the asset lineage when applicable. Refer to the [supported capabilities](#supported-capabilities) section for the supported Azure SQL Database lineage scenarios. For more information about lineage in general, see [data lineage](concept-data-lineage.md) and the [lineage user guide](catalog-lineage-user-guide.md).
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage.png" alt-text="Screenshot that shows the screen with lineage from stored procedures.":::
-2. Go to stored procedure asset -> Properties -> Related assets to see the latest run details of stored procedures
+2. Go to stored procedure asset -> Properties -> Related assets to see the latest run details of stored procedures.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-properties.png" alt-text="Screenshot that shows the screen with stored procedure properties containing runs.":::
+3. Select the stored procedure hyperlink next to **Runs** to see the Azure SQL Stored Procedure Run Overview. Go to the properties tab to see enhanced runtime information from the stored procedure, for example: executedTime, rowcount, Client Connection, and so on.
+3. Select the stored procedure hyperlink next to Runs to see Azure SQL Stored Procedure Run Overview. Go to properties tab to see enhanced run time information from stored procedure. For example: executedTime, rowcount, Client Connection, and so on.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-run-properties.png" alt-text="Screenshot that shows the screen with stored procedure run properties."lightbox="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-run-properties-expanded.png":::
purview Tutorial Using Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-using-rest-apis.md
Last updated 09/17/2021
# Tutorial: Use the REST APIs
-In this tutorial, you learn how to use the Microsoft Purview REST APIs. Anyone who wants to submit data to a Microsoft Purview, include Microsoft Purview as part of an automated process, or build their own user experience on the Microsoft Purview can use the REST APIs to do so.
+In this tutorial, you learn how to use the Microsoft Purview REST APIs. Anyone who wants to submit data to Microsoft Purview, include Microsoft Purview as part of an automated process, or build their own user experience on Microsoft Purview can use the REST APIs to do so.
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
To create a new service principal:
The application ID is the `client_id` value in the sample code.
-To use the service principal (application), you need to get
-its password. Here's how:
+To use the service principal (application), you need to know its password. To find it:
1. From the Azure portal, search for and select **Azure Active Directory**, and then select **App registrations** from the left pane. 1. Select your service principal (application) from the list.
its password. Here's how:
## Set up authentication using service principal
-Once service principal is created, you need to assign Data plane roles of your purview account to the service principal created above. The below steps need to be followed to assign role to establish trust between the service principal and purview account.
+Once the new service principal is created, you need to assign your Purview account's data plane roles to it. Follow the steps below to assign the correct role and establish trust between the service principal and the Purview account; a token-request sketch follows the portal steps:
1. Navigate to your [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). 1. Select the Data Map in the left menu.
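+
+Once a data plane role is assigned, the service principal can request a token for Microsoft Purview. A minimal sketch of that token request, assuming the standard Azure AD client credentials flow; the tenant ID, client ID, and client secret are placeholders for your own values:
+
+```azurepowershell-interactive
+## Placeholder values: substitute your tenant ID, application (client) ID, and client secret. ##
+$tenantId = '<tenant-id>'
+$body = @{
+    grant_type    = 'client_credentials'
+    client_id     = '<client-id>'
+    client_secret = '<client-secret>'
+    resource      = 'https://purview.azure.net'
+}
+$response = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" -Body $body
+## Bearer token to send in the Authorization header of subsequent REST calls. ##
+$response.access_token
+```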
remote-rendering Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/resources/troubleshoot.md
Make sure that your firewalls (on device, inside routers, etc.) don't block the
## Failed to load model
-When loading a model (e.g. via a Unity sample) fails although the blob configuration is correct, it is likely that the blob storage is not properly linked. This is explained in the [linking of a storage account](../how-tos/create-an-account.md#link-storage-accounts) chapter. Note that after correct linking it can take up to 30 minutes until the changes take effect.
+When loading a model (for example, via a Unity sample) fails although the blob configuration is correct, it's likely that the blob storage isn't properly linked. This is explained in the [linking of a storage account](../how-tos/create-an-account.md#link-storage-accounts) chapter. Note that after correct linking it can take up to 30 minutes until the changes take effect.
## Can't link storage account to ARR account
Sometimes during [linking of a storage account](../how-tos/create-an-account.md#
Check that your GPU supports hardware video decoding. See [Development PC](../overview/system-requirements.md#development-pc).
-If you are working on a laptop with two GPUs, it is possible that the GPU you are running on by default, does not provide hardware video decoding functionality. If so, try to force your app to use the other GPU. This is often possible in the GPU driver settings.
+If you're working on a laptop with two GPUs, it's possible that the GPU you're running on by default, doesn't provide hardware video decoding functionality. If so, try to force your app to use the other GPU. This is often possible in the GPU driver settings.
## Retrieve session/conversion status fails Sending REST API commands too frequently will cause the server to throttle and return failure eventually. The HTTP status code in the throttling case is 429 ("too many requests"). As a rule of thumb, there should be a delay of **5-10 seconds between subsequent calls**.
-Note this limit not only affects the REST API calls when called directly but also their C#/C++ counterparts, such as `Session.GetPropertiesAsync`, `Session.RenewAsync`, or `Frontend.GetAssetConversionStatusAsync`. Some functions also return information when it is save to retry. For example `RenderingSessionPropertiesResult.MinimumRetryDelay` specifies how many seconds to wait before attempting another check. When available, using such a returned value is best, as it allows you to do checks as often as possible, without getting throttled.
+Note that this limit not only affects REST API calls made directly but also their C#/C++ counterparts, such as `Session.GetPropertiesAsync`, `Session.RenewAsync`, or `Frontend.GetAssetConversionStatusAsync`. Some functions also return information about when it's safe to retry. For example, `RenderingSessionPropertiesResult.MinimumRetryDelay` specifies how many seconds to wait before attempting another check. When available, using such a returned value is best, as it allows you to do checks as often as possible without getting throttled.
-If you experience server-side throttling, change the code to do the calls less frequently. The server will reset the throttling state every minute, so it is safe to rerun the code after a minute.
+If you experience server-side throttling, change the code to do the calls less frequently. The server will reset the throttling state every minute, so it's safe to rerun the code after a minute.
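+
+A rough client-side pacing sketch against a generic REST status endpoint; the URI and response fields below are placeholders, not the actual ARR REST contract:
+
+```azurepowershell-interactive
+## Placeholder endpoint; use the real session-status URI and authentication from the ARR REST reference. ##
+$statusUri = 'https://example.invalid/session/status'
+do {
+    $status = Invoke-RestMethod -Uri $statusUri -Method Get
+    ## Prefer a server-suggested delay when present; otherwise stay in the 5-10 second range. ##
+    $delaySeconds = if ($status.minimumRetryDelay) { $status.minimumRetryDelay } else { 10 }
+    Start-Sleep -Seconds $delaySeconds
+} while ($status.sessionStatus -eq 'Starting')
+```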
## H265 codec not available
The reason for this issue is an incorrect security setting on the DLLs. This pro
1. Open that folder in Windows Explorer 1. There should be an **x86** and an **x64** subfolder. Right-click on one of the folders and choose **Properties**
- 1. Select the **Security** tab and click the **Advanced** settings button
- 1. Click **Change** for the **Owner**
+ 1. Select the **Security** tab and select the **Advanced** settings button
+ 1. Select **Change** for the **Owner**
1. Type **Administrators** into the text field
- 1. Click **Check Names** and **OK**
+ 1. Select **Check Names** and **OK**
1. Repeat the steps above for the other folder 1. Also repeat the steps above on each DLL file inside both folders. There should be four DLLs altogether.
The video quality can be compromised either by network quality or the missing H2
* See the steps to [identify network problems](#unstable-holograms). * See the [system requirements](../overview/system-requirements.md#development-pc) for installing the latest graphics driver.
-## Video recorded with MRC does not reflect the quality of the live experience
+## Video recorded with MRC doesn't reflect the quality of the live experience
A video can be recorded on HoloLens through [Mixed Reality Capture (MRC)](/windows/mixed-reality/mixed-reality-capture-for-developers). However the resulting video has worse quality than the live experience for two reasons: * The video framerate is capped at 30 Hz as opposed to 60 Hz.
-* The video images do not go through the [late stage reprojection](../overview/features/late-stage-reprojection.md) processing step, so the video appears to be choppier.
+* The video images don't go through the [late stage reprojection](../overview/features/late-stage-reprojection.md) processing step, so the video appears to be choppier.
Both are inherent limitations of the recording technique. ## Black screen after successful model loading
-If you are connected to the rendering runtime and loaded a model successfully, but only see a black screen afterwards, then this can have a few distinct causes.
+If you're connected to the rendering runtime and loaded a model successfully, but only see a black screen afterwards, then this can have a few distinct causes.
We recommend testing the following things before doing a more in-depth analysis:
-* Is the H265 codec installed? Although there should be a fallback to the H264 codec, we have seen cases where this fallback did not work properly. See the [system requirements](../overview/system-requirements.md#development-pc) for installing the latest graphics driver.
+* Is the H265 codec installed? Although there should be a fallback to the H264 codec, we have seen cases where this fallback didn't work properly. See the [system requirements](../overview/system-requirements.md#development-pc) for installing the latest graphics driver.
* When using a Unity project, close Unity, delete the temporary *library* and *obj* folders in the project directory and load/build the project again. In some cases cached data caused the sample to not function properly for no obvious reason.
-If these two steps did not help, it is required to find out whether video frames are received by the client or not. This can be queried programmatically as explained in the [server-side performance queries](../overview/features/performance-queries.md) chapter. The `FrameStatistics struct` has a member that indicates how many video frames have been received. If this number is larger than 0 and increasing over time, the client receives actual video frames from the server. Consequently it must be a problem on the client side.
+If these two steps didn't help, it's required to find out whether video frames are received by the client or not. This can be queried programmatically as explained in the [server-side performance queries](../overview/features/performance-queries.md) chapter. The `FrameStatistics struct` has a member that indicates how many video frames have been received. If this number is larger than 0 and increasing over time, the client receives actual video frames from the server. Consequently, it must be a problem on the client side.
### Common client-side issues
See specific [server size limits](../reference/limits.md#overall-number-of-polyg
**The model is not inside the camera frustum:**
-In many cases, the model is displayed correctly but located outside the camera frustum. A common reason is that the model has been exported with a far off-center pivot so it is clipped by the camera's far clipping plane. It helps to query the model's bounding box programmatically and visualize the box with Unity as a line box or print its values to the debug log.
+In many cases, the model is displayed correctly but located outside the camera frustum. A common reason is that the model has been exported with a far off-center pivot so it's clipped by the camera's far clipping plane. It helps to query the model's bounding box programmatically and visualize the box with Unity as a line box or print its values to the debug log.
-Furthermore the conversion process generates an [output json file](../how-tos/conversion/get-information.md) alongside with the converted model. To debug model positioning issues, it is worth looking at the `boundingBox` entry in the [outputStatistics section](../how-tos/conversion/get-information.md#the-outputstatistics-section):
+Furthermore, the conversion process generates an [output json file](../how-tos/conversion/get-information.md) alongside the converted model. To debug model positioning issues, it's worth looking at the `boundingBox` entry in the [outputStatistics section](../how-tos/conversion/get-information.md#the-outputstatistics-section):
```JSON {
Furthermore the conversion process generates an [output json file](../how-tos/co
} ```
-The bounding box is described as a `min` and `max` position in 3D space, in meters. So a coordinate of 1000.0 means it is 1 kilometer away from the origin.
+The bounding box is described as a `min` and `max` position in 3D space, in meters. So a coordinate of 1000.0 means it's 1 kilometer away from the origin.
There can be two problems with this bounding box that lead to invisible geometry: * **The box can be far off-center**, so the object is clipped altogether due to far plane clipping. The `boundingBox` values in this case would look like this: `min = [-2000, -5,-5], max = [-1990, 5,5]`, using a large offset on the x-axis as an example here. To resolve this type of issue, enable the `recenterToOrigin` option in the [model conversion configuration](../how-tos/conversion/configure-model-conversion.md).
Azure Remote Rendering hooks into the Unity render pipeline to do the frame comp
## Checkerboard pattern is rendered after model loading If the rendered image looks like this:
-![Screenshot shows a grid of black and white squares with a Tools menu.](../reference/media/checkerboard.png)
+![Screenshot shows a grid of black and white squares with a Tools menu.](../reference/media/checkerboard.png)
+ then the renderer hits the [polygon limits for the standard configuration size](../reference/vm-sizes.md). To mitigate, either switch to **premium** configuration size or reduce the number of visible polygons. ## The rendered image in Unity is upside-down
Make sure to follow the [Unity Tutorial: View remote models](../tutorials/unity/
Reasons for this issue could be MSAA, HDR, or enabling post processing. Make sure that the low-quality profile is selected and set as default in the Unity. To do so go to *Edit > Project Settings... > Quality*.
-When using the OpenXR plugin in Unity 2020, there are versions of the URP (Universal Render Pipeline) that create this extra off-screen render target regardless of post processing being enabled. It is thus important to upgrade the URP version manually to at least 10.5.1 (or higher). This is described in the [system requirements](../overview/system-requirements.md#unity-2020).
+When using the OpenXR plugin in Unity 2020, there are versions of the URP (Universal Render Pipeline) that create this extra off-screen render target regardless of post processing being enabled. It's thus important to upgrade the URP version manually to at least 10.5.1 (or higher). This is described in the [system requirements](../overview/system-requirements.md#unity-2020).
## Unity code using the Remote Rendering API doesn't compile
Switch the *build type* of the Unity solution to **Debug**. When testing ARR in
### Compile failures when compiling Unity samples for HoloLens 2
-We have seen spurious failures when trying to compile Unity samples (quickstart, ShowCaseApp, ..) for HoloLens 2. Visual Studio complains about not being able to copy some files albeit they are there. If you hit this problem:
+We have seen spurious failures when trying to compile Unity samples (quickstart, ShowCaseApp, and so on) for HoloLens 2. Visual Studio complains about not being able to copy some files even though they're there. If you hit this problem:
* Remove all temporary Unity files from the project and try again. That is, close Unity, delete the temporary *library* and *obj* folders in the project directory and load/build the project again. * Make sure the projects are located in a directory on disk with reasonably short path, since the copy step sometimes seems to run into problems with long filenames.
-* If that does not help, it could be that MS Sense interferes with the copy step. To set up an exception, run this registry command from command line (requires admin rights):
+* If that doesn't help, it could be that MS Sense interferes with the copy step. To set up an exception, run this registry command from command line (requires admin rights):
```cmd reg.exe ADD "HKLM\SOFTWARE\Policies\Microsoft\Windows Advanced Threat Protection" /v groupIds /t REG_SZ /d "Unity" ```
We have seen spurious failures when trying to compile Unity samples (quickstart,
The `AudioPluginMsHRTF.dll` for Arm64 was added to the *Windows Mixed Reality* package *(com.unity.xr.windowsmr.metro)* in version 3.0.1. Ensure that you have version 3.0.1 or later installed via the Unity Package Manager. From the Unity menu bar, navigate to *Window > Package Manager* and look for the *Windows Mixed Reality* package.
-## Native C++ based application does not compile
+## The Unity `Cinemachine` plugin does not work in Remote pose mode
+
+In [Remote pose mode](../overview/features/late-stage-reprojection.md#reprojection-pose-modes), the ARR Unity binding code implicitly creates a proxy camera that performs the actual rendering. In this case, the main camera's culling mask is set to 0 ("nothing") to effectively turn off rendering for it. However, some third-party plugins (like `Cinemachine`) that drive the camera may rely on at least some layer bits being set.
+
+For this purpose, the binding code allows you to programmatically change the layer bitmask for the main camera. Specifically, the following steps are required:
+
+1. Create a new layer in Unity that isn't used for rendering any local scene geometry. In this example, assume the layer is named "Cam".
+1. Pass this bitmask to ARR so ARR sets it on the main camera:
+ ```cs
+ RemoteManagerUnity.CameraCullingMask = LayerMask.GetMask("Cam");
+ ```
+1. Configure the `Cinemachine` properties to use this new layer:
+![Screenshot that shows Unity's inspector panel for camera settings in `Cinemachine`.](./media/cinemachine-camera-config.png)
+
+The local pose mode isn't affected by this, since in this case the ARR binding doesn't redirect rendering to an internal proxy camera.
+
+## Native C++ based application doesn't compile
### 'Library not found' error for UWP application or DLL
-Inside the C++ NuGet package, there is file `microsoft.azure.remoterendering.Cpp.targets` file that defines which of the binary flavor to use. To identify `UWP`, the conditions in the file check for `ApplicationType == 'Windows Store'`. So it needs to be ensured that this type is set in the project. That should be the case when creating a UWP application or DLL through Visual Studio's project wizard.
+Inside the C++ NuGet package, there's a file named `microsoft.azure.remoterendering.Cpp.targets` that defines which binary flavor to use. To identify `UWP`, the conditions in the file check for `ApplicationType == 'Windows Store'`. So you need to ensure that this type is set in the project. That should be the case when creating a UWP application or DLL through Visual Studio's project wizard.
## Unstable Holograms
In case rendered objects seem to be moving along with head movements, you might
Another reason for unstable holograms (wobbling, warping, jittering, or jumping holograms) can be poor network connectivity, in particular insufficient network bandwidth, or too high latency. A good indicator for the quality of your network connection is the [performance statistics](../overview/features/performance-queries.md) value `ServiceStatistics.VideoFramesReused`. Reused frames indicate situations where an old video frame needed to be reused on the client side because no new video frame was available ΓÇô for example because of packet loss or because of variations in network latency. If `ServiceStatistics.VideoFramesReused` is frequently larger than zero, this indicates a network problem.
-Another value to look at is `ServiceStatistics.LatencyPoseToReceiveAvg`. It should consistently be below 100 ms. Seeing higher values could indicate that you are connected to a data center that is too far away.
+Another value to look at is `ServiceStatistics.LatencyPoseToReceiveAvg`. It should consistently be below 100 ms. Seeing higher values could indicate that you're connected to a data center that is too far away.
For a list of potential mitigations, see the [guidelines for network connectivity](../reference/network-requirements.md#guidelines-for-network-connectivity).
Compare these examples with your z-fighting to determine the cause or optionally
1. If the z-fighting is visible most of the time, the surfaces are nearly coplanar. 1. If the z-fighting is only visible from far away, the cause is lack of depth precision.
-Coplanar surfaces can have a number of different causes:
+Coplanar surfaces can have many different causes:
* An object was duplicated by the exporting application because of an error or different workflow approaches.
Coplanar surfaces can have a number of different causes:
## Graphics artifacts using multi-pass stereo rendering in native C++ apps
-In some cases, custom native C++ apps that use a multi-pass stereo rendering mode for local content (rendering to the left and right eye in separate passes) after calling [**BlitRemoteFrame**](../concepts/graphics-bindings.md#render-remote-image-openxr) can trigger a driver bug. The bug results in non-deterministic rasterization glitches, causing individual triangles or parts of triangles of the local content to randomly disappear. For performance reasons, it is recommended anyway to render local content with a more modern single-pass stereo rendering technique, for example using **SV_RenderTargetArrayIndex**.
+In some cases, custom native C++ apps that use a multi-pass stereo rendering mode for local content (rendering to the left and right eye in separate passes) after calling [**BlitRemoteFrame**](../concepts/graphics-bindings.md#render-remote-image-openxr) can trigger a driver bug. The bug results in non-deterministic rasterization glitches, causing individual triangles or parts of triangles of the local content to randomly disappear. For performance reasons, it's recommended anyway to render local content with a more modern single-pass stereo rendering technique, for example using **SV_RenderTargetArrayIndex**.
## Conversion File Download Errors The Conversion service may encounter errors downloading files from blob storage because of file system limitations. Specific failure cases are listed below. Comprehensive information on Windows file system limitations can be found in the [Naming Files, Paths, and Namespaces](/windows/win32/fileio/naming-a-file) documentation. ### Colliding path and file name
-In blob storage it is possible to create a file and a folder of the exact same name as sibling entries. In Windows file system this is not possible. Accordingly, the service will emit a download error in that case.
+In blob storage, it's possible to create a file and a folder of the exact same name as sibling entries. In Windows file system this isn't possible. Accordingly, the service will emit a download error in that case.
### Path length
-There are path length limits imposed by Windows and the service. File paths and file names in your blob storage must not exceed 178 characters. For example given a `blobPrefix` of `models/Assets` which is 13 characters:
+There are path length limits imposed by Windows and the service. File paths and file names in your blob storage must not exceed 178 characters. For example, given a `blobPrefix` of `models/Assets`, which is 13 characters:
`models/Assets/<any file or folder path greater than 164 characters will fail the conversion>`
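+
+A quick way to check the arithmetic (pure string math, no Azure calls):
+
+```azurepowershell-interactive
+## With a 13-character blobPrefix, 178 - (13 + 1) = 164 characters remain for the rest of the path. ##
+$blobPrefix = 'models/Assets'
+$remaining = 178 - ($blobPrefix.Length + 1)   # the +1 accounts for the '/' separator
+$remaining   # 164
+```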
search Knowledge Store Create Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-create-rest.md
-+ Last updated 05/11/2022 # Create a knowledge store using REST and Postman Knowledge store is a feature of Azure Cognitive Search that sends skillset output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) to Azure Storage for subsequent knowledge mining, data analysis, or downstream processing. After the knowledge store is populated, you can use tools like [Storage Browser](knowledge-store-view-storage-explorer.md) or [Power BI](knowledge-store-connect-power-bi.md) to explore the content.
-In this article, you'll use the REST API to ingest, enrich, and explore a set of customer reviews of hotel stays in a knowledge store in Azure Storage. The end result is a knowledge store that contains original text content pulled from the source, plus AI-generated content that includes a sentiment score, key phrase extraction, language detection, and text translation of non-English customer comments.
+In this article, you'll learn how to use the REST API to ingest, enrich, and explore a set of customer reviews of hotel stays in a knowledge store in Azure Storage. The end result is a knowledge store that contains original text content pulled from the source, plus AI-generated content that includes a sentiment score, key phrase extraction, language detection, and text translation of non-English customer comments.
To make the initial data set available, the hotel reviews are first imported into Azure Blob Storage. Post-processing, the results are saved as a knowledge store in Azure Table Storage. > [!NOTE]
-> This articles assumes the [Postman desktop app](https://www.getpostman.com/) for this article. The [source code](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/knowledge-store) for this article includes a Postman collection containing all of the requests.
+> The [source code](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/knowledge-store) for this article includes a Postman collection containing all of the requests. If you don't want to use Postman, you can [create the same knowledge store in the Azure portal](knowledge-store-create-portal.md) using the Import data wizard.
-## Create services and load data
+## Prerequisites
-This exercise uses Azure Cognitive Search, Azure Blob Storage, and [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI.
++ [Postman desktop app](https://www.getpostman.com/)
-Because the workload is so small, Cognitive Services is tapped behind the scenes to provide free processing for up to 20 transactions daily. A small workload means that you can skip creating or attaching a Cognitive Services resource.
++ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-1. [Download HotelReviews_Free.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Free.csv?sp=r&st=2019-11-04T01:23:53Z&se=2025-11-04T16:00:00Z&spr=https&sv=2019-02-02&sr=b&sig=siQgWOnI%2FDamhwOgxmj11qwBqqtKMaztQKFNqWx00AY%3D). This data is hotel review data saved in a CSV file (originates from Kaggle.com) and contains 19 pieces of customer feedback about a single hotel.
++ Azure Cognitive Search. [Create a service](search-create-service-portal.md) or [find an existing one](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). You can use the free service for this exercise.
+
++ Azure Storage. [Create an account](../storage/common/storage-account-create.md) or [find an existing one](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/). The account type must be **StorageV2 (general purpose V2)**.
+
++ Sample data loaded into Blob Storage (instructions provided in the next section).
-1. [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal) or [find an existing account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/). You'll use Azure Storage for both the raw content to be imported, and the knowledge store that is the end result.
+## Load data
- Choose the **StorageV2 (general purpose V2)** account type.
+This exercise uses Azure Cognitive Search, Azure Blob Storage, and [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes to provide free processing for up to 20 transactions daily. A small workload means that you can skip creating or attaching a Cognitive Services resource.
+
+1. [Download HotelReviews_Free.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Free.csv?sp=r&st=2019-11-04T01:23:53Z&se=2025-11-04T16:00:00Z&spr=https&sv=2019-02-02&sr=b&sig=siQgWOnI%2FDamhwOgxmj11qwBqqtKMaztQKFNqWx00AY%3D). This data is hotel review data saved in a CSV file (originates from Kaggle.com) and contains 19 pieces of customer feedback about a single hotel.
1. In the Azure Storage resource, use **Storage Browser** to create a blob container named **hotel-reviews**.
Because the workload is so small, Cognitive Services is tapped behind the scenes
During skillset execution, the indexer connects to Azure Storage and creates the knowledge store. The connection information will be specified in the "knowledgeStore" section of the skillset. You can choose from the following approaches when setting up your connection:
-+ Option 1, obtain a full access Azure Storage connection string that includes an access key:
++ Option 1: Obtain a full access Azure Storage connection string that includes an access key: In the Azure Storage portal page, select **Access Keys** on the left navigation pane.
During skillset execution, the indexer connects to Azure Storage and creates the
} ```
-+ Option 2, use your search service's system managed identity or user-assigned managed identity to connect to Azure Storage. Follow the instructions and examples in [Connect using a managed identity](search-howto-managed-identities-data-sources.md). You'll need to set up the managed identity, assign roles, and assemble a connection string.
++ Option 2: Use your search service's system managed identity or user-assigned managed identity to connect to Azure Storage. Follow the instructions and examples in [Connect using a managed identity](search-howto-managed-identities-data-sources.md). You'll need to set up the managed identity, assign roles, and assemble a connection string. A connection string for a system managed identity has the following format:
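+
+To the best of my knowledge (verify against the managed identity article linked above), that format is a resource ID rather than a key-based string, along these lines:
+
+```JSON
+"knowledgeStore": {
+  "storageConnectionString": "ResourceId=/subscriptions/{subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Storage/storageAccounts/{storage-account-name};"
+}
+```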
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
+
+ Title: Azure Certificate Authority details
+description: Create, deploy, and manage a cloud-native Public Key Infrastructure with Azure PKI.
+
+Last updated : 04/28/2022
+
+# Azure Certificate Authority details
+
+This article provides the details of the root and subordinate Certificate Authorities (CAs) utilized by Azure. The minimum requirements for public key encryption and signature algorithms, as well as links to certificate downloads and revocation lists, are provided below the CA details tables.
+
+Looking for CA details specific to Azure Active Directory? See the [Certificate authorities used by Azure Active Directory](../../active-directory/fundamentals/certificate-authorities.md) article.
+
+**How to read the certificate details:**
+- The Serial Number (top string in the table) contains the hexadecimal value of the certificate serial number.
+- The Thumbprint (bottom string in the table) is the SHA-1 thumbprint.
+- Links to download the Privacy Enhanced Mail (PEM) and Distinguished Encoding Rules (DER) are the last cell in the table.
+
+## Root Certificate Authorities
+
+| Certificate Authority | Expiry Date | Serial Number /<br>Thumbprint | Downloads |
+|- |- |- |- |
+| DigiCert Global Root CA | Nov 10, 2031 | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertglobalrootca2031-11-10der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertglobalrootca2031-11-10pem.crt) |
+| DigiCert Global Root G2 | Jan 15 2038 | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/digicert/digicertglobalrootg22038-01-15der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/digicert/digicertglobalrootg22038-01-15pem.crt) |
+| DigiCert Global Root G3 | Jan 15, 2038 | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertglobalrootg32038-01-15der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertglobalrootg32038-01-15pem.crt) |
+| Baltimore CyberTrust Root | May 12, 2025 | 0x20000b9<br>D4DE20D05E66FC53FE1A50882C78DB2852CAE474 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/baltimorecybertrustroot2025-05-12der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/baltimorecybertrustroot2025-05-12pem.crt) |
+| Microsoft ECC Root Certificate Authority 2017 | Jul 18, 2042 | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsofteccrootcertificateauthority20172042-07-18der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsofteccrootcertificateauthority20172042-07-18pem.crt) |
+| Microsoft RSA Root Certificate Authority 2017 | Jul 18, 2042 | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftrsarootcertificateauthority20172042-07-18der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftrsarootcertificateauthority20172042-07-18pem.crt) |
+
+## Subordinate Certificate Authorities
+
+| Certificate Authority | Expiry Date | Serial Number /<br>Thumbprint | Downloads |
+|- |- |- |- |
+| DigiCert SHA2 Secure Server CA | Sep 22, 2030 | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertsha2secureserverca2030-09-22der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertsha2secureserverca2030-09-22pem.crt) |
+| DigiCert TLS Hybrid ECC SHA384 2020 CA1 | Sep 22, 2030 | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicerttlshybrideccsha3842020ca12030-09-22der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicerttlshybrideccsha3842020ca12030-09-22pem.crt) |
+| DigiCert Cloud Services CA-1 | Aug 4, 2030 | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertcloudservicesca-12030-08-04der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertcloudservicesca-12030-08-04pem.crt) |
+| DigiCert Basic RSA CN CA G2 | Mar 4, 2030 | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertbasicrsacncag22030-03-04der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicertbasicrsacncag22030-03-04pem.crt) |
+| DigiCert TLS RSA SHA256 2020 CA1 | Apr 13, 2031 | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicerttlsrsasha2562020ca12031-04-13der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/digicerttlsrsasha2562020ca12031-04-13pem.crt) |
+| GeoTrust RSA CA 2018 | Nov 6, 2027 | 0x0546fe1823f7e1941da39fce14c46173<br>7CCC2A87E3949F20572B18482980505FA90CAC3B | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/geotrustrsaca20182027-11-06der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/digicert/geotrustrsaca20182027-11-06pem.crt) |
+| Microsoft Azure TLS Issuing CA 01 | Jun 27, 2024 | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca012024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca012024-06-27-xsignpem.crt) |
+| Microsoft Azure TLS Issuing CA 01 | Jun 27, 2024 | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca012024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca012024-06-27pem.crt) |
+| Microsoft Azure TLS Issuing CA 02 | Jun 27, 2024 | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca022024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca022024-06-27-xsignpem.crt) |
+| Microsoft Azure TLS Issuing CA 02 | Jun 27, 2024 | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca022024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca022024-06-27pem.crt) |
+| Microsoft Azure TLS Issuing CA 05 | Jun 27, 2024 | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca052024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca052024-06-27-xsignpem.crt) |
+| Microsoft Azure TLS Issuing CA 05 | Jun 27, 2024 | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca052024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220427T0114263671Z/media/cafiles/mspki/microsoftazuretlsissuingca052024-06-27pem.crt) |
+| Microsoft Azure TLS Issuing CA 06 | Jun 27, 2024 | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca062024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca062024-06-27-xsignpem.crt) |
+| Microsoft Azure TLS Issuing CA 06 | Jun 27, 2024 | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca062024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazuretlsissuingca062024-06-27pem.crt) |
+| Microsoft Azure ECC TLS Issuing CA 01 | Jun 27, 2024 | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca012024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca012024-06-27-xsignpem.crt) |
+| Microsoft Azure ECC TLS Issuing CA 01 | Jun 27, 2024 | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca012024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca012024-06-27pem.crt) |
+| Microsoft Azure ECC TLS Issuing CA 02 | Jun 27, 2024 | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca022024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca022024-06-27-xsignpem.crt) |
+| Microsoft Azure ECC TLS Issuing CA 02 | Jun 27, 2024 | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca022024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca022024-06-27pem.crt) |
+| Microsoft Azure ECC TLS Issuing CA 05 | Jun 27, 2024 | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca052024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca052024-06-27-xsignpem.crt) |
+| Microsoft Azure ECC TLS Issuing CA 05 | Jun 27, 2024 | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca052024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca052024-06-27pem.crt) |
+| Microsoft Azure ECC TLS Issuing CA 06 | Jun 27, 2024 | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca062024-06-27-xsignder.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca062024-06-27-xsignpem.crt) |
+| Microsoft Azure ECC TLS Issuing CA 06 | Jun 27, 2024 | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca062024-06-27der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/mspki/microsoftazureecctlsissuingca062024-06-27pem.crt) |
+| Microsoft RSA TLS CA 01 | Oct 8, 2024 | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/ssladmin/microsoftrsatlsca012024-10-08der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/ssladmin/microsoftrsatlsca012024-10-08pem.crt) |
+| Microsoft RSA TLS CA 02 | Oct 8, 2024 | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 | [DER](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/ssladmin/microsoftrsatlsca022024-10-08der.crt)<br>[PEM](https://hubcontentprod.azureedge.net/content/docfx/f770e87c-605e-4620-91ee-8cb4c8d1bf25/20220412T1713331314Z/media/cafiles/ssladmin/microsoftrsatlsca022024-10-08pem.crt) |
+
+## Client compatibility for public PKIs
+
+| Windows | Firefox | iOS | macOS | Android | Java |
+|:--:|:--:|:--:|:--:|:--:|:--:|
+| Windows XP SP3+ | Firefox 32+ | iOS 7+ | OS X Mavericks (10.9)+ | Android SDK 5.x+ | Java JRE 1.8.0_101+ |
+
+## Public key encryption and signature algorithms
+
+Support for the following algorithms, elliptical curves, and key sizes is required:
+
+Signature algorithms:
+- ES256
+- ES384
+- ES512
+- RS256
+- RS384
+- RS512
+
+Elliptical curves:
+- P256
+- P384
+- P521
+
+Key sizes:
+- ECDSA 256
+- ECDSA 384
+- ECDSA 521
+- RSA 2048
+- RSA 3072
+- RSA 4096
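+
+To verify that a downloaded certificate matches the tables above, you can inspect it locally. The following PowerShell sketch assumes you've saved one of the DER-encoded `.crt` files from the tables above to a local path; the path shown is a hypothetical example:
+
+```azurepowershell
+# A minimal sketch: inspect a downloaded CA certificate locally.
+# The file path is a hypothetical example; point it at your own download.
+$certPath = "$HOME\Downloads\digicertglobalrootg2.crt"
+$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($certPath)
+
+# Compare these values against the Expiry Date and Serial Number/Thumbprint
+# columns in the tables above.
+$cert.Thumbprint
+$cert.SerialNumber
+$cert.NotAfter
+$cert.SignatureAlgorithm.FriendlyName
+```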
+
+## Certificate downloads and revocation lists
+
+The following domains may need to be included in your firewall allowlists to optimize connectivity:
+
+AIA:
+- `cacerts.digicert.com`
+- `cacerts.digicert.cn`
+- `cacerts.geotrust.com`
+- `www.microsoft.com`
+
+CRL:
+- `crl.microsoft.com`
+- `crl3.digicert.com`
+- `crl4.digicert.com`
+- `crl.digicert.cn`
+- `cdp.geotrust.com`
+- `mscrl.microsoft.com`
+- `www.microsoft.com`
+
+OCSP:
+- `ocsp.msocsp.com`
+- `ocsp.digicert.com`
+- `ocsp.digicert.cn`
+- `oneocsp.microsoft.com`
+- `status.geotrust.com`
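+
+If you maintain a restrictive outbound firewall, you can spot-check reachability of these endpoints before rolling out the allowlist. The following PowerShell sketch samples a few of the domains above; `Test-NetConnection` is available on Windows, and port 80 is tested because AIA, CRL, and OCSP distribution points are served over HTTP:
+
+```azurepowershell
+# Sketch: test outbound connectivity to a sample of the endpoints listed above.
+$endpoints = 'cacerts.digicert.com', 'crl.microsoft.com', 'crl3.digicert.com',
+             'ocsp.digicert.com', 'oneocsp.microsoft.com'
+foreach ($endpoint in $endpoints) {
+    Test-NetConnection -ComputerName $endpoint -Port 80 |
+        Select-Object ComputerName, TcpTestSucceeded
+}
+```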
+
+## Past changes
+
+Microsoft updated Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs) on February 15, 2021, to comply with changes set forth by the CA/Browser Forum. See [Azure TLS certificate changes](tls-certificate-changes.md) for additional information.
+
+## Next steps
+
+To learn more about Certificate Authorities and PKI, see:
+
+- [Microsoft PKI Repository](https://www.microsoft.com/pkiops/docs/repository.htm)
+- [Microsoft PKI Repository, including CRL and policy information](https://www.microsoft.com/pki/mscorp/cps/default.htm)
+- [Azure Firewall Premium certificates](../../firewall/premium-certificates.md)
+- [PKI certificates and Configuration Manager](/mem/configmgr/core/plan-design/security/plan-for-certificates)
+- [Securing PKI](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn786443(v=ws.11))
security Tls Certificate Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/tls-certificate-changes.md
tags: azure-resource-manager
Previously updated : 02/18/2022 Last updated : 04/28/2022 # Azure TLS certificate changes
-Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This change is being made because the current CA certificates do not comply with one of the CA/Browser Forum Baseline requirements and will be revoked on February 15, 2021.
+Microsoft uses TLS certificates from the set of Root Certificate Authorities (CAs) that adhere to the CA/Browser Forum Baseline Requirements. All Azure TLS/SSL endpoints contain certificates chaining up to the Root CAs provided in this article. Azure endpoints began transitioning in August 2020, with some services completing their updates in 2022. All newly created Azure TLS/SSL endpoints contain updated certificates chaining up to the new Root CAs.
-## When will this change happen?
-
-Existing Azure endpoints have been transitioning in a phased manner since August 13, 2020. All newly created Azure TLS/SSL endpoints contain updated certificates chaining up to the new Root CAs.
-
-All Azure services are impacted by this change. Here are some more details for specific
+All Azure services are impacted by this change. Details for some services are listed below:
- [Azure Active Directory](../../active-directory/index.yml) (Azure AD) services began this transition on July 7, 2020.-- [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub) and [DPS](../../iot-dps/index.yml) will remain on Baltimore CyberTrust Root CA but their intermediate CAs will change. [Click here for details](https://techcommunity.microsoft.com/t5/internet-of-things/azure-iot-tls-changes-are-coming-and-why-you-should-care/ba-p/1658456).-- [Azure Cosmos DB](../../cosmos-db/index.yml) will begin this transition in July 2022 with an expected completion in October 2022.-- For [Azure Storage](../../storage/index.yml), [click here for details](https://techcommunity.microsoft.com/t5/azure-storage/azure-storage-tls-critical-changes-are-almost-here-and-why-you/ba-p/2741581).-- [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) Azure Cache for Redis is moving away from TLS certificates issued by Baltimore CyberTrust Root starting May 2022. [Click here for details](../../azure-cache-for-redis/cache-whats-new.md).-- For [Azure Instance Metadata Service](../../virtual-machines/linux/instance-metadata-service.md?tabs=linux), see [Azure Instance Metadata Service-Attested data TLS: Critical changes are almost here!](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-instance-metadata-service-attested-data-tls-critical/ba-p/2888953) for details.
+- [Azure IoT Hub](../../iot-hub/iot-hub-tls-support.md) and [DPS](../../iot-dps/tls-support.md) remain on Baltimore CyberTrust Root CA but their intermediate CAs will change. Explore other details provided in [this Azure IoT blog post](https://techcommunity.microsoft.com/t5/internet-of-things-blog/azure-iot-tls-critical-changes-are-almost-here-and-why-you/ba-p/2393169).
+- [Azure Cosmos DB](/security/benchmark/azure/baselines/cosmos-db-security-baseline) began this transition in July 2022 with an expected completion in October 2022.
+- Details on [Azure Storage](../../storage/common/transport-layer-security-configure-minimum-version.md) TLS certificate changes can be found in [this Azure Storage blog post](https://techcommunity.microsoft.com/t5/azure-storage/azure-storage-tls-critical-changes-are-almost-here-and-why-you/ba-p/2741581).
+- [Azure Cache for Redis](../../azure-cache-for-redis/cache-overview.md) is moving away from TLS certificates issued by Baltimore CyberTrust Root starting May 2022, as described in this [Azure Cache for Redis article](../../azure-cache-for-redis/cache-whats-new.md).
+- [Azure Instance Metadata Service](../../virtual-machines/linux/instance-metadata-service.md) has an expected completion in May 2022, as described in [this Azure Governance and Management blog post](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-instance-metadata-service-attested-data-tls-critical/ba-p/2888953).
-> [!IMPORTANT]
-> Customers may need to update their application(s) after this change to prevent connectivity failures when attempting to connect to Azure Storage.
-https://techcommunity.microsoft.com/t5/azure-storage/azure-storage-tls-critical-changes-are-almost-here-and-why-you/ba-p/2741581
-## What is changing?
+## What changed?
-Today, most of the TLS certificates used by Azure services chain up to the following Root CA:
+Prior to the change, most of the TLS certificates used by Azure services chained up to the following Root CA:
| Common name of the CA | Thumbprint (SHA1) | |--|--| | [Baltimore CyberTrust Root](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | d4de20d05e66fc53fe1a50882c78db2852cae474 |
-TLS certificates used by Azure services will chain up to one of the following Root CAs:
+After the change, TLS certificates used by Azure services will chain up to one of the following Root CAs:
| Common name of the CA | Thumbprint (SHA1) | |--|--|
TLS certificates used by Azure services will chain up to one of the following Ro
| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 73a5e64a3bff8316ff0edccc618a906e4eae4d74 | | [Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 999a64c37ff47d9fab95f14769891460eec4c3c5 |
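To see which root CA a given endpoint currently chains to, you can open a TLS connection and walk the returned chain. The following PowerShell sketch uses .NET types directly; the hostname is an example, so substitute the Azure service endpoint you care about:

```azurepowershell
# Sketch: print the certificate chain presented by a TLS endpoint.
# The hostname is an example; replace it with your service endpoint.
$hostname = 'management.azure.com'
$tcp = [System.Net.Sockets.TcpClient]::new($hostname, 443)
$ssl = [System.Net.Security.SslStream]::new($tcp.GetStream())
$ssl.AuthenticateAsClient($hostname)

# Rebuild the chain locally; the last element printed is the root CA.
$leaf = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($ssl.RemoteCertificate)
$chain = [System.Security.Cryptography.X509Certificates.X509Chain]::new()
$null = $chain.Build($leaf)
$chain.ChainElements | ForEach-Object { $_.Certificate.Subject }

$ssl.Dispose()
$tcp.Dispose()
```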
-## When can I retire the old intermediate thumbprint?
-
-The current CA certificates will *not* be revoked until February 15, 2021. After that date you can remove the old thumbprints from your code.
-
-If this date changes, you will be notified of the new revocation date.
-
-## Will this change affect me?
+## <a id="will-this-change-affect-me"></a>Was my application impacted?
-We expect that **most Azure customers will not** be impacted. However, your application may be impacted if it explicitly specifies a list of acceptable CAs. This practice is known as certificate pinning.
+If your application explicitly specifies a list of acceptable CAs, it was likely impacted. This practice is known as certificate pinning. Review the [Microsoft Tech Community article on Azure Storage TLS changes](https://techcommunity.microsoft.com/t5/azure-storage-blog/azure-storage-tls-critical-changes-are-almost-here-and-why-you/ba-p/2741581) for more information on how to determine if your services were impacted and next steps.
-Here are some ways to detect if your application is impacted:
+Here are some ways to detect if your application was impacted:
-- Search your source code for the thumbprint, Common Name, and other cert properties of any of the Microsoft IT TLS CAs found [here](https://www.microsoft.com/pki/mscorp/cps/default.htm). If there is a match, then your application will be impacted. To resolve this problem, update the source code include the new CAs. As a best practice, ensure that CAs can be added or edited on short notice. Industry regulations require CA certificates to be replaced within seven days and hence customers relying on pinning need to react swiftly.
+- Search your source code for the thumbprint, Common Name, and other cert properties of any of the Microsoft IT TLS CAs in the [Microsoft PKI repository](https://www.microsoft.com/pki/mscorp/cps/default.htm); see the sketch after this list for an example search. If there's a match, then your application was impacted. To resolve this problem, update the source code to include the new CAs. As a best practice, ensure that CAs can be added or edited on short notice. Industry regulations require CA certificates to be replaced within seven days of the change, so customers relying on certificate pinning need to react swiftly.
-- If you have an application that integrates with Azure APIs or other Azure services and you are unsure if it uses certificate pinning, check with the application vendor.
+- If you have an application that integrates with Azure APIs or other Azure services and you're unsure if it uses certificate pinning, check with the application vendor.
- Different operating systems and language runtimes that communicate with Azure services may require more steps to correctly build the certificate chain with these new roots: - **Linux**: Many distributions require you to add CAs to /etc/ssl/certs. For specific instructions, refer to the distribution's documentation.
Here are some ways to detect if your application is impacted:
- **Android**: Check the documentation for your device and version of Android. - **Other hardware devices, especially IoT**: Contact the device manufacturer. -- If you have an environment where firewall rules are set to allow outbound calls to only specific Certificate Revocation List (CRL) download and/or Online Certificate Status Protocol (OCSP) verification locations. You will need to allow the following CRL and OCSP URLs:
+- If you have an environment where firewall rules are set to allow outbound calls to only specific Certificate Revocation List (CRL) download and/or Online Certificate Status Protocol (OCSP) verification locations, you'll need to allow the following CRL and OCSP URLs:
- http://crl3&#46;digicert&#46;com - http://crl4&#46;digicert&#46;com
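As a starting point for the source-code search described earlier in this list, the following PowerShell sketch scans a source tree for one pinned thumbprint; the source path is hypothetical, and you'd repeat the search for each CA thumbprint listed in this article:

```azurepowershell
# Sketch: scan a source tree for a pinned certificate thumbprint.
# The Baltimore CyberTrust Root thumbprint is used as an example.
$sourceRoot = 'C:\src\my-app'
$thumbprint = 'd4de20d05e66fc53fe1a50882c78db2852cae474'
Get-ChildItem -Path $sourceRoot -Recurse -File |
    Select-String -Pattern $thumbprint -SimpleMatch -List |
    Select-Object Path, LineNumber
```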
Here are some ways to detect if your application is impacted:
## Next steps
-If you have additional questions, contact us through [support](https://azure.microsoft.com/support/options/).
+If you have questions, contact us through [support](https://azure.microsoft.com/support/options/).
service-fabric Service Fabric Cross Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cross-availability-zones.md
description: Learn how to create an Azure Service Fabric cluster across Availabi
Previously updated : 03/16/2022 Last updated : 05/13/2022
You don't need to configure the `FaultDomain` and `UpgradeDomain` overrides.
>[!NOTE] > > * Service Fabric clusters should have at least one primary node type. The durability level of primary node types should be Silver or higher.
-> * The Availability Zone that spans virtual machine scale sets should be configured with at least three Availability Zones, no matter the durability level.
-> * Availability Zones that span virtual machine scale sets with Silver or higher durability should have at least 15 VMs.
-> * Availability Zones that span virtual machine scale sets with Bronze durability should have at least six VMs.
+> * An Availability Zone spanning virtual machine scale set should be configured with at least three Availability Zones, no matter the durability level.
+> * An Availability Zone spanning virtual machine scale set with Silver or higher durability should have at least 15 VMs.
+> * An Availability Zone spanning virtual machine scale set with Bronze durability should have at least six VMs.
### Enable support for multiple zones in the Service Fabric node type
service-fabric Service Fabric Quickstart Containers Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-quickstart-containers-linux.md
Title: Create a Linux container app on Service Fabric in Azure description: In this quickstart, you will build a Docker image with your application, push the image to a container registry, and then deploy your container to a Service Fabric cluster. Previously updated : 07/22/2019 Last updated : 05/12/2022 # Quickstart: Deploy Linux containers to Service Fabric
cd service-fabric-containers/Linux/container-tutorial/Voting
To deploy the application to Azure, you need a Service Fabric cluster to run the application. The following commands create a five-node cluster in Azure. The commands also create a self-signed certificate, add it to a key vault, and download the certificate locally. The new certificate is used to secure the cluster when it deploys and is used to authenticate clients.
+If you wish, you can modify the variable values to your preference, such as `westus` instead of `eastus` for the location.
+
+> [!NOTE]
+> Key vault names should be universally unique, as they are accessed as https://{vault-name}.vault.azure.net.
+>
```azurecli #!/bin/bash
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
Yes, you can create a Capacity Reservation for your VM SKU in the disaster recov
### Why should I reserve capacity using Capacity Reservation at the destination location?
-While Site Recovery makes a best effort to ensure that capacity is available in the recovery region, it does not guarantee the same. Site Recovery's best effort is backed by a 2-hour RTO SLA. But if you require further assurance and _guaranteed compute capacity,_ then we recommend you to purchase [Capacity Reservations](https://aka.ms/on-demand-ca.pacity-reservations-docs)
+While Site Recovery makes a best effort to ensure that capacity is available in the recovery region, it doesn't guarantee it. Site Recovery's best effort is backed by a 2-hour RTO SLA. If you require further assurance and _guaranteed compute capacity_, we recommend that you purchase [Capacity Reservations](https://aka.ms/on-demand-capacity-reservations-docs).
### Does Site Recovery work with reserved instances?
Yes, both encryption in transit and [encryption at rest in Azure](../storage/com
- [Review Azure-to-Azure support requirements](azure-to-azure-support-matrix.md). - [Set up Azure-to-Azure replication](azure-to-azure-tutorial-enable-replication.md).-- If you have questions after reading this article, post them on the [Microsoft Q&A question page for Azure Recovery Services](/answers/topics/azure-site-recovery.html).
+- If you have questions after reading this article, post them on the [Microsoft Q&A question page for Azure Recovery Services](/answers/topics/azure-site-recovery.html).
spring-cloud How To Enable Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-availability-zone.md
This article explains availability zones in Azure Spring Cloud, and how to enabl
In Microsoft Azure, [Availability Zones (AZ)](../availability-zones/az-overview.md) are unique physical locations within an Azure region. Each zone is made up of one or more data centers that are equipped with independent power, cooling, and networking. Availability zones protect your applications and data from data center failures.
-When a service in Azure Spring Cloud has availability zone enabled, Azure automatically spreads the application's deployment instance across all three zones in the selected region. If the application's deployment instance count is larger than three and is divisible by three, the instances will be spread evenly. Otherwise, the extra instance counts are spread across the remaining zones.
+When an Azure Spring Cloud service instance is created with availability zone enabled, Azure Spring Cloud will automatically distribute fundamental resources across logical sections of underlying Azure infrastructure. This distribution provides a higher level of availability to protect against a hardware failure or a planned maintenance event.
## How to create an instance in Azure Spring Cloud with availability zone enabled
To create a service in Azure Spring Cloud with availability zone enabled using t
## Region availability Azure Spring Cloud currently supports availability zones in the following regions:+
+- Australia East
+- Brazil South
+- Canada Central
- Central US-- West US 2 - East US-- Australia East-- North Europe - East US 2-- West Europe
+- France Central
+- Germany West Central
+- North Europe
+- Japan East
+- Korea Central
+- South Africa North
- South Central US
+- Southeast Asia
- UK South-- Brazil South-- France Central
+- West Europe
+- West US 2
+- West US 3
+
+> [!NOTE]
+> In the following regions, you can currently create an instance with availability zone enabled only by using the Azure CLI; Azure portal support is coming soon.
+>
+> - Canada Central
+> - Germany West Central
+> - Japan East
+> - Korea Central
+> - South Africa North
+> - Southeast Asia
+> - West US 3
## Pricing
There's no extra cost for enabling the availability zone.
## Next steps
-* [Plan for disaster recovery](disaster-recovery.md)
+- [Plan for disaster recovery](disaster-recovery.md)
spring-cloud How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-prepare-app-deployment.md
For details, see the [Java runtime and OS versions](./faq.md?pivots=programming-
To prepare an existing Spring Boot application for deployment to Azure Spring Cloud, include the Spring Boot and Spring Cloud dependencies in the application POM file as shown in the following sections.
-Azure Spring Cloud will support the latest Spring Boot or Spring Cloud release within one month after it's been released. You can get supported Spring Boot versions from [Spring Boot Releases](https://github.com/spring-projects/spring-boot/wiki/Supported-Versions#releases) and Spring Cloud versions from [Spring Cloud Releases](https://github.com/spring-cloud/spring-cloud-release/wiki).
+Azure Spring Cloud will support the latest Spring Boot or Spring Cloud major version starting from 30 days after its release. The latest minor version will be supported as soon as it is released. You can get supported Spring Boot versions from [Spring Boot Releases](https://github.com/spring-projects/spring-boot/wiki/Supported-Versions#releases) and Spring Cloud versions from [Spring Cloud Releases](https://github.com/spring-cloud/spring-cloud-release/wiki).
The following table lists the supported Spring Boot and Spring Cloud combinations:
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
Previously updated : 04/15/2022 Last updated : 05/13/2022
For more information on pricing differences between standard-priority and high-p
## Copy an archived blob to an online tier
-The first option for moving a blob from the Archive tier to an online tier is to copy the archived blob to a new destination blob that is in either the Hot or Cool tier. You can use either the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy the blob. When you copy an archived blob to a new blob an online tier, the source blob remains unmodified in the Archive tier.
+The first option for moving a blob from the Archive tier to an online tier is to copy the archived blob to a new destination blob that is in either the Hot or Cool tier. You can use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy the blob. When you copy an archived blob to a new blob in an online tier, the source blob remains unmodified in the Archive tier.
You must copy the archived blob to a new blob with a different name or to a different container. You cannot overwrite the source blob by copying to the same blob.
storage Storage Use Azcopy V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-v10.md
description: AzCopy is a command-line utility that you can use to copy data to,
Previously updated : 11/15/2021 Last updated : 05/11/2022
First, download the AzCopy V10 executable file to any directory on your computer
These files are compressed as a zip file (Windows and Mac) or a tar file (Linux). To download and decompress the tar file on Linux, see the documentation for your Linux distribution.
+For detailed information on AzCopy releases, see the [AzCopy release page](https://github.com/Azure/azure-storage-azcopy/releases).
+ > [!NOTE] > If you want to copy data to and from your [Azure Table storage](../tables/table-storage-overview.md) service, then install [AzCopy version 7.3](https://aka.ms/downloadazcopynet).
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
The following diagram illustrates the end-to-end workflow for enabling Azure AD
![Diagram showing Azure AD over SMB for Azure Files workflow](media/storage-files-active-directory-enable/azure-active-directory-over-smb-workflow.png)
-## Recommended: Use AES-256 encryption
-
-By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use Kerberos AES-256 encryption instead by following these steps:
-
-As an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions), open the Azure cloud shell.
-
-Execute the following commands:
-
-```azurepowershell
-# 1. Find the service account in your managed domain that represents the storage account.
-
-$storageAccountName = "<InsertStorageAccountNameHere>"
-$searchFilter = "Name -like '*{0}*'" -f $storageAccountName
-$userObject = Get-ADUser -filter $searchFilter
-
-if ($userObject -eq $null)
-{
- Write-Error "Cannot find AD object for storage account:$storageAccountName" -ErrorAction Stop
-}
-
-# 2. Set the KerberosEncryptionType of the object
-
-Set-ADUser $userObject -KerberosEncryptionType AES256
-
-# 3. Validate that the object now has the expected (AES256) encryption type.
-
-Get-ADUser $userObject -properties KerberosEncryptionType
-```
- ## Enable Azure AD DS authentication for your account To enable Azure AD DS authentication over SMB for Azure Files, you can set a property on storage accounts by using the Azure portal, Azure PowerShell, or Azure CLI. Setting this property implicitly "domain joins" the storage account with the associated Azure AD DS deployment. Azure AD DS authentication over SMB is then enabled for all new and existing file shares in the storage account.
az storage account update -n <storage-account-name> -g <resource-group-name> --e
```
+## Recommended: Use AES-256 encryption
+
+By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use Kerberos AES-256 encryption instead by following these steps:
+
+As an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions), open the Azure Cloud Shell.
+
+Execute the following commands:
+
+```azurepowershell
+# 1. Find the service account in your managed domain that represents the storage account.
+
+$storageAccountName = "<InsertStorageAccountNameHere>"
+$searchFilter = "Name -like '*{0}*'" -f $storageAccountName
+$userObject = Get-ADUser -filter $searchFilter
+
+if ($userObject -eq $null)
+{
+ Write-Error "Cannot find AD object for storage account:$storageAccountName" -ErrorAction Stop
+}
+
+# 2. Set the KerberosEncryptionType of the object
+
+Set-ADUser $userObject -KerberosEncryptionType AES256
+
+# 3. Validate that the object now has the expected (AES256) encryption type.
+
+Get-ADUser $userObject -properties KerberosEncryptionType
+```
+ [!INCLUDE [storage-files-aad-permissions-and-mounting](../../../includes/storage-files-aad-permissions-and-mounting.md)] You have now successfully enabled Azure AD DS authentication over SMB and assigned a custom role that provides access to an Azure file share with an Azure AD identity. To grant additional users access to your file share, follow the instructions in the [Assign access permissions](#assign-access-permissions-to-an-identity) to use an identity and [Configure NTFS permissions over SMB sections](#configure-ntfs-permissions-over-smb).
synapse-analytics Restore Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/restore-sql-pool.md
Steps:
9. Verify that the restored dedicated SQL pool (formerly SQL DW) is online.
-10. If the desired destination is a Synapse Workspace, uncomment the code to perform the additional restore step.
+10. **If the desired destination is a Synapse Workspace, uncomment the code to perform the additional restore step.**
1. Create a restore point for the newly created data warehouse. 2. Retrieve the last restore point created by using the "Select -Last 1" syntax. 3. Perform the restore to the desired Synapse workspace.
Get-AzSubscription
Select-AzSubscription -SubscriptionName $SourceSubscriptionName # list all restore points
-Get-AzSynapseSqlPoolRestorePoint -ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName -Name $SQLPoolName
+Get-AzSynapseSqlPoolRestorePoint -ResourceGroupName $SourceResourceGroupName -WorkspaceName $SourceWorkspaceName -Name $SourceSQLPoolName
# Pick desired restore point using RestorePointCreationDate "xx/xx/xxxx xx:xx:xx xx" $PointInTime="<RestorePointCreationDate>" # Get the specific SQL pool to restore
-$SQLPool = Get-AzSynapseSqlPool -ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName -Name $SQLPoolName
+$SQLPool = Get-AzSynapseSqlPool -ResourceGroupName $SourceResourceGroupName -WorkspaceName $SourceWorkspaceName -Name $SourceSQLPoolName
# Transform Synapse SQL pool resource ID to SQL database ID because currently the restore command only accepts the SQL database ID format. $DatabaseID = $SQLPool.Id -replace "Microsoft.Synapse", "Microsoft.Sql" ` -replace "workspaces", "servers" `
Select-AzSubscription -SubscriptionName $TargetSubscriptionName
# Restore database from a desired restore point of the source database to the target server in the desired subscription $RestoredDatabase = Restore-AzSqlDatabase -FromPointInTimeBackup -PointInTime $PointInTime -ResourceGroupName $TargetResourceGroupName `
- -ServerName $TargetServerName -TargetDatabaseName $TargetDatabaseName -ResourceId $Database.ID
+ -ServerName $TargetServerName -TargetDatabaseName $TargetDatabaseName -ResourceId $DatabaseID
# Verify the status of restored database $RestoredDatabase.status
$RestoredDatabase.status
``` -
+## Troubleshooting
+A restore operation can result in a deployment failure based on a "RequestTimeout" exception.
+![Screenshot from resource group deployments dialog of a timeout exception.](../media/sql-pools/restore-sql-pool-troubleshooting-01.png)
+This timeout can be ignored. Review the dedicated SQL pool blade in the Azure portal; it may still have a status of "Restoring" and will eventually transition to "Online".
+![Screenshot of SQL pool dialog with the status that shows restoring.](../media/sql-pools/restore-sql-pool-troubleshooting-02.png)
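+
+To watch for that transition from a shell instead of refreshing the portal, you can poll the pool's status. This is a minimal sketch; `$TargetResourceGroupName` follows the script above, while `$TargetWorkspaceName` and `$TargetSQLPoolName` are hypothetical placeholders for your target workspace and pool:
+
+```azurepowershell
+# Sketch: poll the restored dedicated SQL pool until it leaves "Restoring".
+do {
+    $pool = Get-AzSynapseSqlPool -ResourceGroupName $TargetResourceGroupName `
+        -WorkspaceName $TargetWorkspaceName -Name $TargetSQLPoolName
+    Write-Output "Current status: $($pool.Status)"
+    if ($pool.Status -eq 'Restoring') { Start-Sleep -Seconds 60 }
+} while ($pool.Status -eq 'Restoring')
+```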
## Next Steps
synapse-analytics Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md
Title: Backup and restore - snapshots, geo-redundant description: Learn how backup and restore works in Azure Synapse Analytics dedicated SQL pool. Use backups to restore your data warehouse to a restore point in the primary region. Use geo-redundant backups to restore to a different geographical region.--++ Previously updated : 11/13/2020-- Last updated : 05/04/2022++
The following lists details for restore point retention periods:
When you drop a dedicated SQL pool, a final snapshot is created and saved for seven days. You can restore the dedicated SQL pool to the final restore point created at deletion. If the dedicated SQL pool is dropped in a paused state, no snapshot is taken. In that scenario, make sure to create a user-defined restore point before dropping the dedicated SQL pool.
-> [!IMPORTANT]
-> If you delete the server/workspace hosting a dedicated SQL pool, all databases that belong to the server/workspace are also deleted and cannot be recovered. You cannot restore a deleted server.
- ## Geo-backups and disaster recovery A geo-backup is created once per day to a [paired data center](../../availability-zones/cross-region-replication-azure.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any other region where dedicated SQL pool is supported. A geo-backup ensures you can restore data warehouse in case you cannot access the restore points in your primary region.
You can either keep the restored data warehouse and the current one, or delete o
To restore a data warehouse, see [Restore a dedicated SQL pool](sql-data-warehouse-restore-points.md#create-user-defined-restore-points-through-the-azure-portal).
-To restore a deleted or paused data warehouse, you can [create a support ticket](sql-data-warehouse-get-started-create-support-ticket.md).
+To restore a deleted data warehouse, see [Restore a deleted database](sql-data-warehouse-restore-deleted-dw.md), or if the entire server was deleted, see [Restore a data warehouse from a deleted server](sql-data-warehouse-restore-from-deleted-server.md).
## Cross subscription restore
-If you need to directly restore across subscription, vote for this capability [here](https://feedback.azure.com/d365community/ide?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) the server across subscriptions to perform a cross subscription restore.
+You can perform a cross-subscription restore by following the guidance in [Restore an existing dedicated SQL pool (formerly SQL DW) to a different subscription through PowerShell](sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell).
## Geo-redundant restore
You can [restore your dedicated SQL pool](sql-data-warehouse-restore-from-geo-ba
> [!NOTE] > To perform a geo-redundant restore you must not have opted out of this feature.
+## Support Process
+
+You can [submit a support ticket](sql-data-warehouse-get-started-create-support-ticket.md) through the Azure portal for Azure Synapse Analytics.
+ ## Next steps For more information about restore points, see [User-defined restore points](sql-data-warehouse-restore-points.md)
synapse-analytics Sql Data Warehouse Restore From Deleted Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md
In this article, you learn how to restore a dedicated SQL pool (formerly SQL DW)
```powershell $SubscriptionID="<YourSubscriptionID>" $ResourceGroupName="<YourResourceGroupName>"
-$ServereName="<YourServerNameWithoutURLSuffixSeeNote>" # Without database.windows.net
+$ServerName="<YourServerNameWithoutURLSuffixSeeNote>" # Without database.windows.net
$DatabaseName="<YourDatabaseName>" $TargetServerName="<YourtargetServerNameWithoutURLSuffixSeeNote>" $TargetDatabaseName="<YourDatabaseName>"
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
To use scaling plans, make sure you follow these guidelines:
## Create a custom RBAC role in the Azure portal
-Before creating your first scaling plan, you'll need to create a custom role-based access control (RBAC) role with your Azure subscription as the assignable scope. Assigning this custom role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent autoscale from working properly. This custom role and assignment will allow Azure Virtual Desktop to manage the power state of any VMs in your subscription. It will also let the service apply actions on both host pools and VMs when there are no active user sessions. For more information about creating custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
+Before creating your first scaling plan, you'll need to create a custom role-based access control (RBAC) role with your Azure subscription as the assignable scope. Assigning this custom role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent autoscale from working properly. You'll need to add each Azure subscription as an assignable scope that contains host pools and session host VMs you want to use with autoscale. This custom role and assignment will allow Azure Virtual Desktop to manage the power state of any VMs in those subscriptions. It will also let the service apply actions on both host pools and VMs when there are no active user sessions. For more information about creating custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
> [!IMPORTANT]
-> You must have the `Microsoft.Authorization/roleAssignments/write` permission on your subscription in order to create and assign the custom role for the service principal on your subscription. This is part of **User Access Administrator** and **Owner** built in roles.
+> You must have the `Microsoft.Authorization/roleAssignments/write` permission on your subscriptions in order to create and assign the custom role for the Azure Virtual Desktop service principal on those subscriptions. This is part of the **User Access Administrator** and **Owner** built-in roles.
-To create and assign the custom role on your subscription with the Azure portal:
+To create the custom role with the Azure portal:
-1. Open the Azure portal and go to **Subscriptions** and select the subscription that contains the host pool you want to use with autoscale.
+1. Open the Azure portal, go to **Subscriptions**, and select a subscription that contains a host pool and session host VMs you want to use with autoscale.
-1. Select **Access control (IAM)**
+1. Select **Access control (IAM)**.
-1. Select the **+ Add** button, then select **Add custom role** from the drop-down menu, as shown in the following screenshot:
-
- > [!div class="mx-imgBorder"]
- > ![A screenshot showing the drop-down menu that appears when you select the plus sign and add button in the Access control (I A M) blade in the Azure portal. The option add custom role is highlighted with a red border.](media/add-custom-role.png)
+1. Select the **+ Add** button, then select **Add custom role** from the drop-down menu.
1. Next, on the **Basics** tab, enter a custom role name and add a description. We recommend you name the role *Azure Virtual Desktop Autoscale* with the description *Scales your Azure Virtual Desktop deployment up or down*.
To create and assign the custom role on your subscription with the Azure portal:
1. On the **Permissions** tab, select Next. You'll add the permissions later on the JSON tab.
-1. On the **Assignable scopes** tab, your subscription will be listed. If you also want to assign this custom role to other subscriptions containing host pools, select **Add assignable scopes** and add the relevant subscriptions.
+1. On the **Assignable scopes** tab, your subscription will be listed. If you also want to assign this custom role to other subscriptions containing host pools and session host VMs, select **Add assignable scopes** and add the relevant subscriptions.
1. On the **JSON** tab, select **Edit** and add the following permissions to the `"actions": []` array. These entries must be enclosed within the square brackets.
To create and assign the custom role on your subscription with the Azure portal:
"Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action" ```
- The completed JSON should look like this, with your subscription ID included as an assignable scope:
+ The completed JSON should look like this, with the subscription ID for each subscription included as assignable scopes:
```json {
To create and assign the custom role on your subscription with the Azure portal:
1. Review the configuration and select **Create**. Once the role has been successfully created, select **OK**. Note that it may take a few minutes to display everywhere.
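If you'd rather script the role creation, a minimal PowerShell sketch follows; it assumes you've saved the completed JSON definition shown above to a hypothetical local file named role.json:

```azurepowershell
# Sketch: create the custom role from a JSON definition saved locally.
# role.json is a hypothetical file containing the completed JSON shown above.
New-AzRoleDefinition -InputFile ".\role.json"
```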
-After you've created the custom role, you'll need to assign it to the Azure Virtual Desktop service principal and grant access.
+After you've created the custom role, you'll need to assign it to the Azure Virtual Desktop service principal and grant access to each subscription.
## Assign the custom role with the Azure portal To assign the custom role with the Azure portal to the Azure Virtual Desktop service principal on the subscription your host pool is deployed to:
-1. In the **Access control (IAM) tab**, select **Add role assignments**.
+1. Sign in to the Azure portal and go to **Subscriptions**. Select a subscription that contains a host pool and session host VMs you want to use with autoscale.
+
+1. Select **Access control (IAM)**.
+
+1. Select the **+ Add** button, then select **Add role assignment** from the drop-down menu.
1. Select the role you just created, for example **Azure Virtual Desktop Autoscale** and select **Next**. 1. On the **Members** tab, select **User, group, or service principal**, then select **+Select members**. In the search bar, enter and select either **Azure Virtual Desktop** or **Windows Virtual Desktop**. Which value you have depends on when the *Microsoft.DesktopVirtualization* resource provider was first registered in your Azure tenant. If you see two entries titled Windows Virtual Desktop, please see the tip below.
-1. Select **Review + assign** to complete the assignment.
+1. Select **Review + assign** to complete the assignment. Repeat this for any other subscriptions that contain host pools and session host VMs you want to use with autoscale.
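+
+If you prefer to script the assignment, the following PowerShell sketch assigns the role at subscription scope; the subscription ID is a placeholder, and the application ID is the well-known service principal ID noted in the tip below:
+
+```azurepowershell
+# Sketch: assign the custom role to the Azure Virtual Desktop service principal
+# at subscription scope. Replace <subscription-id> with your own.
+$objId = (Get-AzADServicePrincipal -AppId "9cdead84-a844-4324-93f2-b2e6bb768d07").Id
+New-AzRoleAssignment -ObjectId $objId `
+    -RoleDefinitionName "Azure Virtual Desktop Autoscale" `
+    -Scope "/subscriptions/<subscription-id>"
+```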
> [!TIP] > The application ID for the service principal is **9cdead84-a844-4324-93f2-b2e6bb768d07**.
To assign the custom role with the Azure portal to the Azure Virtual Desktop ser
> > 1. Open [Azure Cloud Shell](../cloud-shell/overview.md) with PowerShell as the shell type. >
-> 1. Get the object ID (which is unique in each Azure tenant) and store it in a variable:
+> 1. Get the object ID for the service principal (which is unique in each Azure tenant) and store it in a variable:
> > ```powershell > $objId = (Get-AzADServicePrincipal -AppId "9cdead84-a844-4324-93f2-b2e6bb768d07").Id
To assign the custom role with the Azure portal to the Azure Virtual Desktop ser
## Create a scaling plan
-To create a scaling plan:
+Now that you've assigned the custom role to the service principal on your subscriptions, you can create a scaling plan. To create a scaling plan:
1. Open the [Azure portal](https://portal.azure.com).
virtual-desktop Move Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/move-resources.md
+
+ Title: Move Azure Virtual Desktop resources between regions - Azure
+description: How to move Azure Virtual Desktop resources between regions.
++ Last updated : 05/13/2022+++
+# Move Azure Virtual Desktop resources between regions
+
+In this article, we'll tell you how to move Azure Virtual Desktop resources between Azure regions.
+
+## Important information
+
+When you move Azure Virtual Desktop resources between regions, these are some things you should keep in mind:
+
+- When exporting resources, you must move them as a set. All resources associated with a specific host pool have to stay together. A host pool and its associated app groups need to be in the same region.
+
+- Workspaces and their associated app groups also need to be in the same region.
+
+- All resources to be moved have to be in the same resource group. Template exports require having resources in the same group, so if you want them to be in a different location, you'll need to modify the exported template to change the location of its resources.
+
+- Once you're done moving your resources to a new region, you must delete the original resources. The resource IDs of your resources won't change during the move, so there will be a name conflict with your old resources if you don't delete them.
+
+- Existing session hosts attached to a host pool that you move will stop working. You'll need to recreate the session hosts in the new region.
+
+## Export a template
+
+The first step to move your resources is to create a template that contains everything you want to move to the new region.
+
+To export a template:
+
+1. In the Azure portal, go to **Resource Groups**, then select the resource group that contains the resources you want to move.
+
+2. Once you've selected the resource group, go to **Overview** > **Resources** and select all the resources you want to move.
+
+3. Select the **...** button in the upper right-hand corner of the **Resources** tab. Once the drop-down menu opens, select **Export template**.
+
+4. Select **Download** to download a local copy of the generated template.
+
+5. Right-click the zip file and select **Extract All**.
+
+## Modify the exported template
+
+Next, you'll need to modify the template to include the region you're moving your resources to.
+
+To modify the template you exported:
+
+1. Open the template.json file you extracted from the zip folder in a text editor of your choice, such as Notepad.
+
+2. In each resource inside the template file, find the "location" property and modify it to the location you want to move them to. For example, if your deployment's currently in the East US region but you want to move it to the West US region, you'd change the "eastus" location to "westus." Learn more about which Azure regions you can use at [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/#geographies).
+
+3. For each host pool, remove the "publicNetworkAccess" parameter, if present.
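+
+If the template is large, you can script the location change instead of editing it by hand. The following PowerShell sketch assumes the example regions used above; substitute your own source and target regions:
+
+```azurepowershell
+# Sketch: replace every "location" value in the exported template.
+# "eastus" and "westus" are example regions.
+$template = Get-Content -Path ".\template.json" -Raw
+$template = $template -replace '"location":\s*"eastus"', '"location": "westus"'
+Set-Content -Path ".\template.json" -Value $template
+```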
+
+## Delete original resources
+
+Once you have the template ready, you'll need to delete the original resources to prevent name conflicts.
+
+To delete the original resources:
+
+1. Go back to the **Resources** tab mentioned in [Export a template](#export-a-template) and select all the resources you exported to the template.
+
+2. Next, select the **...** button again, then select **Delete** from the drop-down menu.
+
+3. If you see a message asking you to confirm the deletion, select **Confirm**.
+
+4. Wait a few minutes for the resources to finish deleting. Once you're done, they should disappear from the resource list.
+
+## Deploy the modified template
+
+Finally, you'll need to deploy your modified template in the new region.
+
+To deploy the template:
+
+1. In the Azure portal, search for and select **Deploy a custom template**.
+2. In the custom deployment menu, select **Build your own template in the editor**.
+3. Next, select **Load file** and upload your modified template file.
+
+ >[!NOTE]
+ > Make sure to upload the template.json file, not the parameters.json file.
+
+4. When you're done uploading the template, select **Save**.
+5. In the next menu, select **Review + create**.
+6. Under **Instance details**, make sure the **Region** shows the region you changed the location to in [Modify the exported template](#modify-the-exported-template). If not, select the correct region from the drop-down menu.
+7. If everything looks correct, select **Create**.
+8. Wait a few minutes for the template to deploy. Once it's finished, the resources should appear in your resource list.
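+
+You can also deploy the modified template from PowerShell instead of the portal. This is a minimal sketch; the resource group name is a placeholder:
+
+```azurepowershell
+# Sketch: deploy the modified template to a resource group from PowerShell.
+New-AzResourceGroupDeployment -ResourceGroupName "<resource-group>" `
+    -TemplateFile ".\template.json"
+```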
+
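+If you prefer PowerShell for this step, the deployment can also be run with a single cmdlet. This sketch assumes you've already created a resource group named *myResourceGroup* in the new region and that the modified file is `template.json`:
+
+```azurepowershell-interactive
+# Deploy the modified template to a resource group in the new region
+New-AzResourceGroupDeployment `
+    -ResourceGroupName "myResourceGroup" `
+    -TemplateFile ".\template.json"
+```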
+## Next steps
+
+- Find out which Azure regions are currently available at [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/#overview).
+
+- See [our Azure Resource Manager templates for Azure Virtual Desktop](https://github.com/Azure/RDS-Templates/tree/master/wvd-templates) for more templates you can use in your deployments after you move your resources.
+
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
Title: Start virtual machine connect - Azure
-description: How to configure the start virtual machine on connect feature.
+ Title: Set up Start VM on Connect for Azure Virtual Desktop
+description: How to set up the Start VM on Connect feature for Azure Virtual Desktop to turn on session host virtual machines only when they're needed.
Previously updated : 04/14/2022 Last updated : 05/13/2022
-# Start Virtual Machine on Connect
+# Set up Start VM on Connect
-The Start Virtual Machine (VM) on Connect feature lets you save costs by allowing end users to turn on their VMs only when they need them. You can then turn off VMs when they're not needed.
+The Start VM on Connect feature lets you reduce costs by enabling end users to turn on their session host virtual machines (VMs) only when they need them. You can then turn off VMs when they're not needed.
->[!NOTE]
->Azure Virtual Desktop (classic) doesn't support this feature.
+You can configure Start VM on Connect for personal or pooled host pools using the Azure portal or PowerShell. Start VM on Connect is a host pool setting.
-## Requirements and limitations
+For personal host pools, Start VM on Connect will only turn on an existing session host VM that has already been assigned or will be assigned to a user. For pooled host pools, Start VM on Connect will only turn on a session host VM when none are turned on, and more VMs will only be turned on when the first VM reaches the session limit.
-You can enable the start VM on Connect feature for personal or pooled host pools using PowerShell and the Azure portal.
+The time it takes for a user to connect to a session host VM that is powered off (deallocated) increases because the VM needs time to turn on again, much like turning on a physical computer. The Remote Desktop client has an indicator that lets the user know the VM is being powered on while they're connecting.
-The following Remote Desktop clients support the Start VM on Connect feature:
+> [!NOTE]
+> Azure Virtual Desktop (classic) doesn't support this feature.
-- [The web client](./user-documentation/connect-web.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)-- [The Windows client (version 1.2.2061 or later)](./user-documentation/connect-windows-7-10.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)-- [The Android client (version 10.0.10 or later)](./user-documentation/connect-android.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)-- [The macOS client (version 10.6.4 or later)](./user-documentation/connect-macos.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)-- [The iOS client (version 10.2.5 or later)](./user-documentation/connect-ios.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)-- [The Microsoft Store client (version 10.2.2005.0 or later)](./user-documentation/connect-microsoft-store.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)-- The thin clients listed in [Thin client support](./user-documentation/linux-overview.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
+## Prerequisites
-## Create a custom role for Start VM on Connect
+You can only configure Start VM on Connect on existing host pools. You can't enable it at the same time you create a new host pool.
-Before you can configure the Start VM on Connect feature, you'll need to assign a subscription-level custom RBAC (role-based access control) role to the Azure Virtual Desktop service principal. This role will let Azure Virtual Desktop manage the VMs in your subscription. This role grants Azure Virtual Desktop the permissions to turn on VMs, check their status, and report diagnostic info. If you want to know more about Azure custom RBAC roles, take a look at [Azure custom roles](../role-based-access-control/custom-roles.md).
+The following Remote Desktop clients support Start VM on Connect:
->[!IMPORTANT]
->You must have global admin permissions in order to assign the RBAC role to the service principal.
+- The [web client](./user-documentation/connect-web.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
+- The [Windows client](./user-documentation/connect-windows-7-10.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 1.2.2061 or later)
+- The [Android client](./user-documentation/connect-android.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.0.10 or later)
+- The [macOS client](./user-documentation/connect-macos.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.6.4 or later)
+- The [iOS client](./user-documentation/connect-ios.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.2.5 or later)
+- The [Microsoft Store client](./user-documentation/connect-microsoft-store.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.2.2005.0 or later)
+- Thin clients listed in [Thin client support](./user-documentation/linux-overview.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
->[!NOTE]
->If your VMs and host pool are in different subscriptions, the RBAC role needs to be created in the subscription that the VMs are in.
+If you want to configure Start VM on Connect using PowerShell, you'll need to have [the Az.DesktopVirtualization PowerShell module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization) (version 2.1.0 or later) installed on the device you use to run the commands.
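+For example, you can install or update the module from the PowerShell Gallery. This is a minimal sketch; `-Scope CurrentUser` avoids needing an elevated prompt:
+
+```powershell
+# Install the Azure Virtual Desktop PowerShell module (version 2.1.0 or later)
+Install-Module -Name Az.DesktopVirtualization -MinimumVersion 2.1.0 -Scope CurrentUser
+```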
-### Use the Azure portal
+You must grant Azure Virtual Desktop access to power on session host VMs, check their status, and report diagnostic information.
-To use the Azure portal to create a custom role for Start VM on Connect:
+## Create a custom RBAC role in the Azure portal
-1. Open the Azure portal and go to **Subscriptions**.
+Before you can configure Start VM on Connect, you'll need to create a custom role-based access control (RBAC) role with your Azure subscription as the assignable scope. Assigning this custom role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent Start VM on Connect from working properly. You'll need to add each Azure subscription as an assignable scope that contains host pools and session host VMs you want to use with Start VM on Connect. This custom role and assignment will allow Azure Virtual Desktop to power on VMs, check their status, and report diagnostic information in those subscriptions. For more information about creating custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
-2. Select the subscription that your VMs are in.
-
-3. Go to **Access control (IAM)** and select **Add a custom role**.
+> [!IMPORTANT]
+> You must have the `Microsoft.Authorization/roleAssignments/write` permission on your subscriptions in order to create and assign the custom role for the Azure Virtual Desktop service principal on those subscriptions. This permission is part of the **User Access Administrator** and **Owner** built-in roles.
- > [!div class="mx-imgBorder"]
- > ![A screenshot of a drop-down menu from the Add button in Access control (IAM). "Add a custom role" is highlighted in red.](media/add-custom-role.png)
+To create the custom role with the Azure portal:
-4. Next, name the custom role and add a description. We recommend you name it "Start VM on Connect."
+1. Open the Azure portal and go to **Subscriptions** and select a subscription that contains a host pool and session host VMs you want to use with Start VM on Connect.
-5. On the **Permissions** tab, add one of the two following sets of permissions to the role:
-
- - Microsoft.Compute/virtualMachines/start/action
- - Microsoft.Compute/virtualMachines/read
- - Microsoft.Compute/virtualMachines/instanceView/read
+1. Select **Access control (IAM)**.
- You can also use these permissions instead:
+1. Select the **+ Add** button, then select **Add custom role** from the drop-down menu.
- - Microsoft.Compute/virtualMachines/start/action
- - Microsoft.Compute/virtualMachines/*/read
+1. Next, on the **Basics** tab, enter a custom role name and add a description. We recommend you name the role *Azure Virtual Desktop Start VM on Connect* with the description *Turns on session host VMs when users connect to them*.
-6. When you're finished, select **Review + create**. It may take a few minutes for the RBAC service to create the custom role.
+1. For baseline permissions, select **Start from scratch** and select **Next**.
-After that, you'll need to assign the role to the Azure Virtual Desktop service principal.
+1. On the **Permissions** tab, select **Next**. You'll add the permissions later, on the **JSON** tab.
-The following steps describe how to assign the custom role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. On the **Assignable scopes** tab, your subscription will be listed. If you also want to assign this custom role to other subscriptions containing host pools and session host VMs, select **Add assignable scopes** and add the relevant subscriptions.
-1. In the navigation menu of the subscription, select **Access control (IAM)**.
+1. On the **JSON** tab, select **Edit** and add the following permissions to the `"actions": []` array. These entries must be enclosed within the square brackets.
-1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+ ```json
+ "Microsoft.Compute/virtualMachines/start/action",
+ "Microsoft.Compute/virtualMachines/read",
+ "Microsoft.Compute/virtualMachines/instanceView/read"
+ ```
-1. On the **Role** tab, search for and select the role you just created.
+ The completed JSON should look like this, with the subscription ID for each subscription included as assignable scopes:
+
+ ```json
+ {
+ "properties": {
+ "roleName": "Azure Virtual Desktop Start VM on Connect",
+ "description": "Turns on session host VMs when users connect to them",
+ "assignableScopes": [
+ "/subscriptions/00000000-0000-0000-0000-000000000000"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Compute/virtualMachines/start/action",
+ "Microsoft.Compute/virtualMachines/read",
+ "Microsoft.Compute/virtualMachines/instanceView/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
-1. On the **Members** tab, search for and select **Windows Virtual Desktop**.
+1. Select **Save**, then select **Next**.
- > [!NOTE]
- > If you've deployed Azure Virtual Desktop (classic), both the Windows Virtual Desktop and Windows Virtual Desktop Azure Resource Manager Provider first party applications might appear. If so, assign the role to both apps.
- >
+1. Review the configuration and select **Create**. Once the role has been successfully created, select **OK**. It may take a few minutes for the new role to appear everywhere.
- ![Screenshot showing Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+After you've created the custom role, you'll need to assign it to the Azure Virtual Desktop service principal and grant access to each subscription.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+## Assign the custom role with the Azure portal
-### Create a custom role with a JSON file template
+To assign the custom role with the Azure portal to the Azure Virtual Desktop service principal on the subscription your host pool is deployed to:
-If you're using a JSON file to create the custom role, the following example shows a basic template you can use. Make sure you replace the subscription ID value in *AssignableScopes* with the subscription ID you want to assign the role to.
+1. Open the Azure portal and go to **Subscriptions**. Select a subscription that contains a host pool and session host VMs you want to use with Start VM on Connect.
-```json
-{
- "Name": "Start VM on connect (Custom)",
- "IsCustom": true,
- "Description": "Start VM on connect with AVD (Custom)",
- "Actions": [
- "Microsoft.Compute/virtualMachines/start/action",
- "Microsoft.Compute/virtualMachines/*/read"
- ],
- "NotActions": [],
- "DataActions": [],
- "NotDataActions": [],
- "AssignableScopes": [
- "/subscriptions/00000000-0000-0000-0000-000000000000"
- ]
-}
-```
+1. Select **Access control (IAM)**.
-To use the JSON template, save the JSON file, add the relevant subscription information to *Assignable Scopes*, then run the following cmdlet in PowerShell:
+1. Select the **+ Add** button, then select **Add role assignment** from the drop-down menu.
-```powershell
-New-AzRoleDefinition -InputFile "C:\temp\filename"
-```
+1. Select the role you just created, for example, **Azure Virtual Desktop Start VM on Connect**, then select **Next**.
-To learn more about creating custom roles, see [Create or update Azure custom roles using Azure PowerShell](../role-based-access-control/custom-roles-powershell.md#create-a-custom-role-with-json-template).
+1. On the **Members** tab, select **User, group, or service principal**, then select **+Select members**. In the search bar, enter and select either **Azure Virtual Desktop** or **Windows Virtual Desktop**. Which value you see depends on when the *Microsoft.DesktopVirtualization* resource provider was first registered in your Azure tenant. If you see two entries titled *Windows Virtual Desktop*, see the tip below.
-## Configure the Start VM on Connect feature
+1. Select **Review + assign** to complete the assignment. Repeat this for any other subscriptions that contain host pools and session host VMs you want to use with Start VM on Connect.
-Now that you've assigned your subscription the role, it's time to configure the Start VM on Connect feature!
+> [!TIP]
+> The application ID for the service principal is **9cdead84-a844-4324-93f2-b2e6bb768d07**.
+>
+> If you have an Azure Virtual Desktop (classic) deployment and an Azure Virtual Desktop (Azure Resource Manager) deployment where the *Microsoft.DesktopVirtualization* resource provider was registered before the display name changed, you will see two apps with the same name of *Windows Virtual Desktop*. To add the role assignment to the correct service principal, [you can use PowerShell](../role-based-access-control/role-assignments-powershell.md) which enables you to specify the application ID:
+>
+> To assign the custom role with PowerShell to the Azure Virtual Desktop service principal on the subscription your host pool is deployed to:
+>
+> 1. Open [Azure Cloud Shell](../cloud-shell/overview.md) with PowerShell as the shell type.
+>
+> 1. Get the object ID for the service principal (which is unique in each Azure tenant) and store it in a variable:
+>
+> ```powershell
+> $objId = (Get-AzADServicePrincipal -AppId "9cdead84-a844-4324-93f2-b2e6bb768d07").Id
+> ```
+>
+> 1. Find the name of the subscription you want to add the role assignment to by listing all that are available to you:
+>
+> ```powershell
+> Get-AzSubscription
+> ```
+>
+> 1. Get the subscription ID and store it in a variable, replacing the value for `-SubscriptionName` with the name of the subscription from the previous step:
+>
+> ```powershell
+> $subId = (Get-AzSubscription -SubscriptionName "Microsoft Azure Enterprise").Id
+> ```
+>
+> 1. Add the role assignment, where `-RoleDefinitionName` is the name of the custom role you created earlier:
+>
+> ```powershell
+> New-AzRoleAssignment -RoleDefinitionName "Azure Virtual Desktop Start VM on Connect" -ObjectId $objId -Scope /subscriptions/$subId
+> ```
-### Deployment considerations
+## Enable or disable Start VM on Connect
-Start VM on Connect is a host pool setting.
+Now that you've assigned the custom role to the service principal on your subscriptions, you can configure Start VM on Connect using the Azure portal or PowerShell.
-For personal desktops, the feature will only turn on an existing VM that the service has already assigned or will assign to a user. In a pooled host pool scenario, the service will only turn on a VM when none are turned on. The feature will only turn on additional VMs when the first VM reaches the session limit.
+# [Portal](#tab/azure-portal)
->[!IMPORTANT]
-> You can only configure this feature in existing host pools. This feature isn't available when you create a new host pool.
+To configure Start VM on Connect using the Azure portal:
-### Use the Azure portal
+1. Sign in to the [Azure portal](https://portal.azure.com).
-To use the Azure portal to configure Start VM on Connect:
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-1. Open your browser and go to [the Azure portal](https://portal.azure.com).
+1. Select **Host pools**, then select the name of the host pool where you want to enable the setting.
-2. In the Azure portal, go to **Azure Virtual Desktop**.
+1. Select **Properties**.
-3. Select **Host pools**, then go to the host pool where you want to enable the setting.
+1. In the configuration section, you'll see **Start VM on connect**. Select **Yes** to enable it, or **No** to disable it.
-4. In the host pool, select **Properties**. Under **Start VM on connect**, select **Yes**, then select **Save** to instantly apply the setting.
+1. Select **Save**. The new setting is applied.
- > [!div class="mx-imgBorder"]
- > ![A screenshot of the Properties window. The Start VM on connect option is highlighted in red.](media/properties-start-vm-on-connect.png)
+# [PowerShell](#tab/azure-powershell)
-### Use PowerShell
+You need to make sure you have the names of the resource group and host pool you want to configure. To configure Start VM on Connect using PowerShell:
-To configure this setting with PowerShell, you need to make sure you have the names of the resource group and host pools you want to configure. You'll also need to install [the Azure PowerShell module (version 2.1.0 or later)](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/2.1.0).
+1. Open a PowerShell prompt.
-To configure Start VM on Connect using PowerShell:
+1. Sign in to Azure using the `Connect-AzAccount` cmdlet. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-1. Open a PowerShell command window.
+1. Find the name of the subscription that contains host pools and session host VMs you want to use with Start VM on Connect by listing all that are available to you:
-2. Run the following cmdlet to enable Start VM on Connect:
+ ```powershell
+ Get-AzSubscription
+ ```
- ```powershell
- Update-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname> -StartVMOnConnect:$true
- ```
+1. Change your current Azure session to use the subscription you identified in the previous step, replacing the value for `-SubscriptionName` with the name or ID of the subscription:
-3. Run the following cmdlet to disable Start VM on Connect:
+ ```powershell
+ Set-AzContext -Subscription "<subscription name or id>"
+ ```
- ```powershell
- Update-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname> -StartVMOnConnect:$false
- ```
+1. To enable or disable Start VM on Connect, do one of the following steps:
-## User experience
+ 1. To enable Start VM on Connect, run the following command, replacing the value for `-ResourceGroupName` and `-Name` with your values:
-In typical sessions, the time it takes for a user to connect to a deallocated VM increases because the VM needs time to turn on again, much like turning on a physical computer. The Remote Desktop client has an indicator that lets the user know the PC is being powered on while they're connecting.
+ ```powershell
+ Update-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname> -StartVMOnConnect:$true
+ ```
-## Troubleshooting
+ 1. To disable Start VM on Connect, run the following command, replacing the value for `-ResourceGroupName` and `-Name` with your values:
-If the feature runs into any issues, we recommend you use the Azure Virtual Desktop [diagnostics feature](diagnostics-log-analytics.md) to check for problems. If you receive an error message, make sure to pay close attention to the message content and copy down the error name somewhere for reference.
+ ```powershell
+ Update-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname> -StartVMOnConnect:$false
+ ```
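+1. Optionally, verify the change by reading the setting back from the host pool. This sketch assumes the returned host pool object exposes a `StartVMOnConnect` property:
+
+    ```powershell
+    (Get-AzWvdHostPool -ResourceGroupName <resourcegroupname> -Name <hostpoolname>).StartVMOnConnect
+    ```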
-You can also use [Azure Monitor for Azure Virtual Desktop](azure-monitor.md) to get suggestions for how to resolve issues.
++
+## Troubleshooting
-If the VM doesn't turn on, you'll need to check the health of the VM you tried to turn on before you do anything else.
+If the feature runs into any issues, we recommend you use the Azure Virtual Desktop [diagnostics feature](diagnostics-log-analytics.md) to check for problems. If you receive an error message, make sure to pay close attention to the message content and make a note of the error name for reference. You can also use [Azure Monitor for Azure Virtual Desktop](azure-monitor.md) to get suggestions for how to resolve issues.
-## Next steps
+If the session host VM doesn't turn on, you'll need to check the health of the VM you tried to turn on as a first step.
-If you run into any issues that the troubleshooting documentation or the diagnostics feature couldn't solve, check out the [Start VM on Connect FAQ](start-virtual-machine-connect-faq.md).
+For other questions, check out the [Start VM on Connect FAQ](start-virtual-machine-connect-faq.md).
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Here's what's new for April:
### Use the Start VM on Connect feature (preview) in the Azure portal
-You can now configure Start VM on Connect (preview) in the Azure portal. With this update, users can access their VMs from the Android and macOS clients. To learn more, see [Start VM on Connect](start-virtual-machine-connect.md#use-the-azure-portal).
+You can now configure Start VM on Connect (preview) in the Azure portal. With this update, users can access their VMs from the Android and macOS clients. To learn more, see [Start VM on Connect](start-virtual-machine-connect.md).
### Required URL Check tool
virtual-machines Dedicated Hosts How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md
This article guides you through how to create an Azure [dedicated host](dedicate
## Limitations - The sizes and hardware types available for dedicated hosts vary by region. Refer to the host [pricing page](https://aka.ms/ADHPricing) to learn more.
+- The fault domain count of the virtual machine scale set can't exceed the fault domain count of the host group.
## Create a host group
-A **host group** is a resource that represents a collection of dedicated hosts. You create a host group in a region and an availability zone, and add hosts to it. When planning for high availability, there are additional options. You can use one or both of the following options with your dedicated hosts:
-- Span across multiple availability zones. In this case, you are required to have a host group in each of the zones you wish to use.-- Span across multiple fault domains which are mapped to physical racks.
+A **host group** is a resource that represents a collection of dedicated hosts. You create a host group in a region and an availability zone, and add hosts to it. When planning for high availability, there are more options. You can use one or both of the following options with your dedicated hosts:
+- Span across multiple availability zones. In this case, you're required to have a host group in each of the zones you wish to use.
+- Span across multiple fault domains, which are mapped to physical racks.
-In either case, you are need to provide the fault domain count for your host group. If you do not want to span fault domains in your group, use a fault domain count of 1.
+In either case, you need to provide the fault domain count for your host group. If you don't want to span fault domains in your group, use a fault domain count of 1.
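+As a quick illustration before the portal, CLI, and PowerShell tabs below, here's a hedged PowerShell sketch that creates a host group using one availability zone and two fault domains. The resource group name and location are assumptions:
+
+```azurepowershell-interactive
+# Create a host group in availability zone 1 with two fault domains
+$hostGroup = New-AzHostGroup `
+   -ResourceGroupName myDedicatedHostsRG `
+   -Name myHostGroup `
+   -Location "EastUS" `
+   -PlatformFaultDomainCount 2 `
+   -Zone 1
+```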
You can also decide to use both availability zones and fault domains. ### [Portal](#tab/portal)
-In this example, we will create a host group using 1 availability zone and 2 fault domains.
+In this example, we'll create a host group using one availability zone and two fault domains.
1. Open the Azure [portal](https://portal.azure.com). 1. Select **Create a resource** in the upper left corner.
az vm host group create \
--platform-fault-domain-count 1 ```
-The following uses [az vm host group create](/cli/azure/vm/host/group#az-vm-host-group-create) to create a host group by using fault domains only (to be used in regions where availability zones are not supported).
+The following code snippet uses [az vm host group create](/cli/azure/vm/host/group#az-vm-host-group-create) to create a host group by using fault domains only (to be used in regions where availability zones aren't supported).
```azurecli-interactive az vm host group create \
Add the `-SupportAutomaticPlacement true` parameter to have your VMs and scale s
## Create a dedicated host
-Now create a dedicated host in the host group. In addition to a name for the host, you are required to provide the SKU for the host. Host SKU captures the supported VM series as well as the hardware generation for your dedicated host.
+Now create a dedicated host in the host group. In addition to a name for the host, you're required to provide the SKU for the host. Host SKU captures the supported VM series and the hardware generation for your dedicated host.
For more information about the host SKUs and pricing, see [Azure Dedicated Host pricing](https://aka.ms/ADHPricing).
-If you set a fault domain count for your host group, you will need to specify the fault domain for your host.
+If you set a fault domain count for your host group, you'll need to specify the fault domain for your host.
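+Here's a hedged PowerShell sketch of creating a host in the group from the previous section. The SKU name *DSv3-Type1* and the fault domain value are assumptions; use the SKU and fault domain that match your host group:
+
+```azurepowershell-interactive
+# Create a dedicated host in fault domain 1 of the host group
+$dHost = New-AzHost `
+   -ResourceGroupName myDedicatedHostsRG `
+   -HostGroupName myHostGroup `
+   -Name myHost `
+   -Location "EastUS" `
+   -Sku DSv3-Type1 `
+   -PlatformFaultDomain 1
+```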
### [Portal](#tab/portal)
If you set a fault domain count for your host group, you will need to specify th
1. Select *myDedicatedHostsRG* as the **Resource group**. 1. In **Instance details**, type *myHost* for the **Name** and select *East US* for the location. 1. In **Hardware profile**, select *Standard Es3 family - Type 1* for the **Size family**, select *myHostGroup* for the **Host group** and then select *1* for the **Fault domain**. Leave the defaults for the rest of the fields.
-1. When you are done, select **Review + create** and wait for validation.
+1. When you're done, select **Review + create** and wait for validation.
1. Once you see the **Validation passed** message, select **Create** to create the host. ### [CLI](#tab/cli)
It will take a few minutes for your VM to be deployed.
### [CLI](#tab/cli)
-Create a virtual machine within a dedicated host using [az vm create](/cli/azure/vm#az-vm-create). If you specified an availability zone when creating your host group, you are required to use the same zone when creating the virtual machine. Replace the values like image and host name with your own. If you are creating a Windows VM, remove `--generate-ssh-keys` to be prompted for a password.
+Create a virtual machine within a dedicated host using [az vm create](/cli/azure/vm#az-vm-create). If you specified an availability zone when creating your host group, you're required to use the same zone when creating the virtual machine. Replace the values like image and host name with your own. If you're creating a Windows VM, remove `--generate-ssh-keys` to be prompted for a password.
```azurecli-interactive az vm create \
When you deploy a scale set, you specify the host group.
### [CLI](#tab/cli)
-When you deploy a scale set using [az vmss create](/cli/azure/vmss#az-vmss-create), you specify the host group using `--host-group`. In this example, we are deploying the latest Ubuntu LTS image. To deploy a Windows image, replace the value of `--image` and remove `--generate-ssh-keys` to be prompted for a password.
+When you deploy a scale set using [az vmss create](/cli/azure/vmss#az-vmss-create), you specify the host group using `--host-group`. In this example, we're deploying the latest Ubuntu LTS image. To deploy a Windows image, replace the value of `--image` and remove `--generate-ssh-keys` to be prompted for a password.
```azurecli-interactive az vmss create \
If you want to manually choose which host to deploy the scale set to, add `--hos
You can add an existing VM to a dedicated host, but the VM must first be Stop\Deallocated. Before you move a VM to a dedicated host, make sure that the VM configuration is supported: -- The VM size must be in the same size family as the dedicated host. For example, if your dedicated host is DSv3, then the VM size could be Standard_D4s_v3, but it could not be a Standard_A4_v2.
+- The VM size must be in the same size family as the dedicated host. For example, if your dedicated host is DSv3, then the VM size could be Standard_D4s_v3, but it couldn't be a Standard_A4_v2.
- The VM needs to be located in same region as the dedicated host. - The VM can't be part of a proximity placement group. Remove the VM from the proximity placement group before moving it to a dedicated host. For more information, see [Move a VM out of a proximity placement group](./windows/proximity-placement-groups.md#move-an-existing-vm-out-of-a-proximity-placement-group) - The VM can't be in an availability set.
Move the VM to a dedicated host using the [portal](https://portal.azure.com).
1. Select **Stop** to stop\deallocate the VM. 1. Select **Configuration** from the left menu. 1. Select a host group and a host from the drop-down menus.
-1. When you are done, select **Save** at the top of the page.
+1. When you're done, select **Save** at the top of the page.
1. After the VM has been added to the host, select **Overview** from the left menu. 1. At the top of the page, select **Start** to restart the VM.
Tags : {}
## Deleting hosts
-You are being charged for your dedicated hosts even when no virtual machines are deployed. You should delete any hosts you are currently not using to save costs.
+
+You're being charged for your dedicated hosts even when no virtual machines are deployed. You should delete any hosts you're currently not using to save costs.
You can only delete a host when there are no longer any virtual machines using it.
After deleting the VMs, you can delete the host using [az vm host delete](/cli/a
az vm host delete -g myDHResourceGroup --host-group myHostGroup --name myHost ```
-Once you have deleted all of your hosts, you may delete the host group using [az vm host group delete](/cli/azure/vm/host/group#az-vm-host-group-delete).
+Once you've deleted all of your hosts, you may delete the host group using [az vm host group delete](/cli/azure/vm/host/group#az-vm-host-group-delete).
```azurecli-interactive az vm host group delete -g myDHResourceGroup --host-group myHostGroup
After deleting the VMs, you can delete the host using [Remove-AzHost](/powershel
Remove-AzHost -ResourceGroupName $rgName -Name myHost ```
-Once you have deleted all of your hosts, you may delete the host group using [Remove-AzHostGroup](/powershell/module/az.compute/remove-azhostgroup).
+Once you've deleted all of your hosts, you may delete the host group using [Remove-AzHostGroup](/powershell/module/az.compute/remove-azhostgroup).
```azurepowershell-interactive Remove-AzHost -ResourceGroupName $rgName -Name myHost
Remove-AzResourceGroup -Name $rgName
- For more information, see the [Dedicated hosts](dedicated-hosts.md) overview. -- There is sample template, available at [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
+- There's a sample template, available at [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
Previously updated : 01/20/2022 Last updated : 05/09/2022
PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegr
} ```
+## Force Delete for VMs
+
+Force delete allows you to forcefully delete your virtual machine, reducing delete latency and immediately freeing up attached resources. Force delete should only be used when you don't intend to reuse virtual hard disks. You can use force delete through the Azure portal, CLI, PowerShell, and REST API.
+
+### [Portal](#tab/portal3)
+
+When you go to delete an existing VM, you will find an option to apply force delete in the delete pane.
+
+1. Open the [portal](https://portal.azure.com).
+1. Navigate to your virtual machine.
+1. On the **Overview** page, select **Delete**.
+1. In the **Delete virtual machine** pane, select the checkbox for **Apply force delete**.
+1. Select **OK**.
+
+### [CLI](#tab/cli3)
+
+Use the `--force-deletion` parameter for [az vm delete](/cli/azure/vm?view=azure-cli-latest#az-vm-delete&preserve-view=true).
+
+```azurecli-interactive
+az vm delete \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --force-deletion
+```
+
+### [PowerShell](#tab/powershell3)
+
+Use the `-ForceDeletion` parameter for [Remove-AzVm](/powershell/module/az.compute/remove-azvm).
+
+```azurepowershell
+Remove-AzVm `
+ -ResourceGroupName "myResourceGroup" `
+ -Name "myVM" `
+ -ForceDeletion $true
+```
+
+### [REST](#tab/rest3)
+
+You can use the Azure REST API to apply force delete to your virtual machines. Use the `forceDeletion` parameter for [Virtual Machines - Delete](/rest/api/compute/virtual-machines/delete).
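+For example, you can call this API from PowerShell with `Invoke-AzRestMethod`. This is a sketch; the subscription ID, resource names, and the `api-version` value are placeholders to replace with your own:
+
+```azurepowershell
+# Force delete a VM through the REST API by setting forceDeletion=true
+Invoke-AzRestMethod `
+    -Method DELETE `
+    -Path "/subscriptions/<subID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?forceDeletion=true&api-version=2022-03-01"
+```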
+++
+## Force Delete for virtual machine scale sets
+
+Force delete allows you to forcefully delete your **Uniform** virtual machine scale sets, reducing delete latency and immediately freeing up attached resources. Force delete should only be used when you don't intend to reuse virtual hard disks. You can use force delete through the Azure portal, CLI, PowerShell, and REST API.
+
+### [Portal](#tab/portal4)
+
+When you go to delete an existing virtual machine scale set, you will find an option to apply force delete in the delete pane.
+
+1. Open the [portal](https://portal.azure.com).
+1. Navigate to your virtual machine scale set.
+1. On the **Overview** page, select **Delete**.
+1. In the **Delete virtual machine scale set** pane, select the checkbox for **Apply force delete**.
+1. Select **OK**.
+
+### [CLI](#tab/cli4)
+
+Use the `--force-deletion` parameter for [az vmss delete](/cli/azure/vmss?view=azure-cli-latest#az-vmss-delete&preserve-view=true).
+
+```azurecli-interactive
+az vmss delete \
+ --resource-group myResourceGroup \
+ --name myVMSS \
+ --force-deletion
+```
+
+### [PowerShell](#tab/powershell4)
+
+Use the `-ForceDeletion` parameter for [Remove-AzVmss](/powershell/module/az.compute/remove-azvmss).
+
+```azurepowershell
+Remove-AzVmss `
+ -ResourceGroupName "myResourceGroup" `
+ -Name "myVMSS" `
+ -ForceDeletion $true
+```
+
+### [REST](#tab/rest4)
+
+You can use the Azure REST API to apply force delete to your virtual machine scale set. Use the `forceDeletion` parameter for [Virtual Machine Scale Sets - Delete](/rest/api/compute/virtual-machine-scale-sets/delete).
+++ ## FAQ ### Q: Does this feature work with shared disks?
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft, where it will be verified to determine its authenticity and correctness as part of the provisioning process.
+> [!NOTE]
+> It's also recommended that you create a ROA for any existing ASN that is advertising the range, to avoid issues during migration.
+ ### Certificate readiness To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
virtual-network Create Custom Ip Address Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft, where it will be verified to determine its authenticity and correctness as part of the provisioning process.
+> [!NOTE]
+> It's also recommended that you create a ROA for any existing ASN that is advertising the range, to avoid issues during migration.
+ ### Certificate readiness To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
To utilize the Azure BYOIP feature, you must perform the following steps prior t
* After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft, where it will be verified to determine its authenticity and correctness as part of the provisioning process.
+> [!NOTE]
+> It's also recommended that you create a ROA for any existing ASN that is advertising the range, to avoid issues during migration.
+ ### Certificate readiness To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
virtual-network Virtual Network Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-public-ip-address.md
For more information, see [Networking for Azure Virtual Machine Scale Sets](../.
Learn how to assign a public IP address to the following resources: - A [Windows](../../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) Virtual Machine on creation. Add IP to an [existing virtual machine](./virtual-network-network-interface-addresses.md#add-ip-addresses).-- [Public load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)-- [Application Gateway](../../application-gateway/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)-- [Site-to-site connection using a VPN gateway](../../vpn-gateway/tutorial-site-to-site-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) - [Virtual Machine Scale Set](../../virtual-machine-scale-sets/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)-- [NAT gateway](../nat-gateway/quickstart-create-nat-gateway-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)-- [Azure Bastion](../../bastion/quickstart-host-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)-- [Azure Firewall](../../firewall/tutorial-firewall-deploy-portal-policy.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+- [Public load balancer](configure-public-ip-load-balancer.md)
- [Cross-region load balancer](../../load-balancer/tutorial-cross-region-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+- [Application Gateway](configure-public-ip-application-gateway.md)
+- [Site-to-site connection using a VPN gateway](configure-public-ip-vpn-gateway.md)
+- [NAT gateway](configure-public-ip-nat-gateway.md)
+- [Azure Bastion](configure-public-ip-bastion.md)
+- [Azure Firewall](configure-public-ip-firewall.md)
## Region availability
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual Network NAT is a fully managed and distributed service. It doesn't depen
### Scalability
+Virtual Network NAT is scaled out from creation. There isn't a ramp-up or scale-out operation required. Azure manages the operation of Virtual Network NAT for you. A NAT gateway always has multiple fault domains and can sustain multiple failures without service outage.
+ A NAT gateway resource can be associated to a subnet and can be used by all compute resources in that subnet. All subnets in a virtual network can use the same resource. When a NAT gateway is associated to a public IP prefix, it automatically scales to the number of IP addresses needed for outbound. ### Performance
Virtual Network NAT is a software defined networking service. A NAT gateway won'
## Virtual Network NAT basics
-A NAT gateway can be created in a specific availability zone. Redundancy is built in within the specified zone. Virtual Network NAT is non-zonal by default. A non-zonal Virtual Network NAT isn't associated to a specific zone and is assigned to a specific zone by Azure. A NAT gateway can be isolated in a specific zone when you create [availability zones](../../availability-zones/az-overview.md) scenarios. This deployment is called a zonal deployment.
+* A NAT gateway can be created in a specific availability zone. Redundancy is built in within the specified zone. Virtual Network NAT is non-zonal by default. A non-zonal Virtual Network NAT isn't associated to a specific zone and is assigned to a specific zone by Azure. A NAT gateway can be isolated in a specific zone when you create [availability zones](../../availability-zones/az-overview.md) scenarios. This deployment is called a zonal deployment.
-Virtual Network NAT is scaled out from creation. There isn't a ramp up or scale-out operation required. Azure manages the operation of Virtual Network NAT for you. A NAT gateway always has multiple fault domains and can sustain multiple failures without service outage.
+* Outbound connectivity can be defined for each subnet with a NAT gateway. Multiple subnets within the same virtual network can have different NAT gateways associated. Multiple subnets within the same virtual network can use the same NAT gateway. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by the NAT gateway without any customer configuration. A NAT gateway takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
+
+* The presence of custom UDRs for virtual appliances and ExpressRoute overrides NAT gateway for directing internet-bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more.
-* Outbound connectivity can be defined for each subnet with a NAT gateway. Multiple subnets within the same virtual network can have different NAT gateways associated. Multiple subnets within the same virtual network can use the same NAT gateway. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by the NAT gateway without any customer configuration. A NAT gateway takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet
+* Virtual Network NAT supports TCP and UDP protocols only. ICMP isn't supported.
-* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more
+* A NAT gateway resource can use up to 16 IP addresses in any combination of the following (a deployment sketch follows this list):
-* Virtual Network NAT supports TCP and UDP protocols only. ICMP isn't supported
+ * Public IP addresses
-* A NAT gateway resource can use a:
+ * Public IP prefixes
- * Public IP
+ * Custom IP prefixes (BYOIP). To learn more, see [Custom IP address prefix (BYOIP)](/azure/virtual-network/ip-services/custom-ip-address-prefix)
- * Public IP prefix
+* Virtual Network NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix.
-* Virtual Network NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as basic load balancer or basic public IPs aren't compatible with Virtual Network NAT. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway
+* Basic resources, such as basic load balancer or basic public IPs, aren't compatible with Virtual Network NAT. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway.
-* To upgrade a basic load balancer to standard, see [Upgrade a public basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md)
+ * To upgrade a basic load balancer to standard, see [Upgrade a public basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md).
-* To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
+ * To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md).
-* Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md)
+* A NAT gateway can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet, but will only be able to direct outbound traffic with an IPv4 address.
- * To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md)
+* Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md).
-* A NAT gateway canΓÇÖt be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet
+ * To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md).
-* A NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the Internet is only allowed in response to an active flow. Services outside your virtual network canΓÇÖt initiate an inbound connection through NAT gateway
+* A NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the internet is only allowed in response to an active flow. Services outside your virtual network can't initiate an inbound connection through NAT gateway.
* A NAT gateway can't span multiple virtual networks.
Virtual Network NAT is scaled out from creation. There isn't a ramp up or scale-
* A NAT gateway can't be deployed in a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub).
-* Virtual machine instances or other compute resources, send TCP reset packets or attempt to communicate on a TCP connection that doesn't exist. An example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address to signal and force connection closure. The public side of a NAT gateway doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted
+* Virtual machine instances or other compute resources send TCP reset packets or attempt to communicate on a TCP connection that doesn't exist. An example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address to signal and force connection closure. The public side of a NAT gateway doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted.
-* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives
+* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives.
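+Here's the deployment sketch mentioned above: a minimal PowerShell example, with assumed resource names, region, and virtual network, that creates a NAT gateway with a standard SKU public IP and associates it with an existing subnet:
+
+```azurepowershell-interactive
+# Create a standard SKU public IP for the NAT gateway
+$pip = New-AzPublicIpAddress -ResourceGroupName "myResourceGroup" -Name "myNatIP" `
+    -Location "eastus" -Sku "Standard" -AllocationMethod "Static"
+
+# Create the NAT gateway with the default 4-minute idle timeout
+$natGw = New-AzNatGateway -ResourceGroupName "myResourceGroup" -Name "myNatGateway" `
+    -Location "eastus" -Sku "Standard" -PublicIpAddress $pip -IdleTimeoutInMinutes 4
+
+# Attach the NAT gateway to an existing subnet and save the change
+$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVNet"
+$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet"
+Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet" `
+    -AddressPrefix $subnet.AddressPrefix -NatGateway $natGw | Set-AzVirtualNetwork
+```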
## Pricing and SLA
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
## Next steps
-* To create and validate a NAT gateway, see [Quickstart: Create a NAT gateway using the Azure portal](quickstart-create-nat-gateway-portal.md)
+* To create and validate a NAT gateway, see [Quickstart: Create a NAT gateway using the Azure portal](quickstart-create-nat-gateway-portal.md).
-* To view a video on more information about Azure Virtual Network NAT, see [How to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4)
+* To view a video on more information about Azure Virtual Network NAT, see [How to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4).
-* Learn about the [NAT gateway resource](./nat-gateway-resource.md)
+* Learn about the [NAT gateway resource](./nat-gateway-resource.md).
* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
virtual-network Tutorial Create Route Table Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-portal.md
Azure routes traffic between all subnets within a virtual network, by default. Y
This tutorial uses the [Azure portal](https://portal.azure.com). You can also use [Azure CLI](tutorial-create-route-table-cli.md) or [Azure PowerShell](tutorial-create-route-table-powershell.md).
+## Overview
+
+This diagram shows the resources created in this tutorial along with the expected network routes.
++ ## Prerequisites Before you begin, you require an Azure account with an active subscription. If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
virtual-wan Howto Connect Vnet Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-connect-vnet-hub-powershell.md
+
+ Title: 'Connect a VNet to a Virtual WAN hub - PowerShell'
+
+description: Learn how to connect a VNet to a Virtual WAN hub using PowerShell.
+++ Last updated : 05/13/2022+++
+# Connect a virtual network to a Virtual WAN hub - PowerShell
+
+This article helps you connect your virtual network to your virtual hub using PowerShell. You can also use the [Azure portal](howto-connect-vnet-hub.md) to complete this task. Repeat these steps for each VNet that you want to connect.
+
+> [!NOTE]
+>
+> * A virtual network can only be connected to one virtual hub at a time.
+> * In order to connect it to a virtual hub, the remote virtual network can't have a gateway.
+
+## Prerequisites
+
+* Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
+* This article assumes that you've already deployed a virtual WAN with a virtual hub, and that you have an existing virtual network that you want to connect to the hub. The steps in the following section reference these existing resources.
+
+### Azure PowerShell
++
+## <a name="signin"></a>Sign in
++
+## Add a connection
+
+1. Declare the variables for the existing resources, including the existing virtual network.
+
+ ```azurepowershell-interactive
+ $resourceGroup = Get-AzResourceGroup -ResourceGroupName "testRG"
+ $virtualWan = Get-AzVirtualWan -ResourceGroupName "testRG" -Name "myVirtualWAN"
+ $virtualHub = Get-AzVirtualHub -ResourceGroupName "testRG" -Name "westushub"
+ $remoteVirtualNetwork = Get-AzVirtualNetwork -Name "MyVirtualNetwork" -ResourceGroupName "testRG"
+ ```
+
+1. You can connect either a new virtual network or an existing one to peer it with the virtual hub. To create the connection:
+
+ ```azurepowershell-interactive
+ New-AzVirtualHubVnetConnection -ResourceGroupName "testRG" -VirtualHubName "westushub" -Name "testvnetconnection" -RemoteVirtualNetwork $remoteVirtualNetwork
+ ```
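+1. To confirm that the connection was created, you can read it back with `Get-AzVirtualHubVnetConnection`, using the same names as above:
+
+    ```azurepowershell-interactive
+    Get-AzVirtualHubVnetConnection -ResourceGroupName "testRG" -VirtualHubName "westushub" -Name "testvnetconnection"
+    ```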
+
+## Next steps
+
+For more information about Virtual WAN, see the [Virtual WAN FAQ](virtual-wan-faq.md).
virtual-wan Howto Connect Vnet Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-connect-vnet-hub.md
Title: Connect a VNet to a Virtual WAN hub
+ Title: 'Connect a VNet to a Virtual WAN hub - portal'
-description: Learn how connect a VNet to a Virtual WAN hub using the portal.
-
+description: Learn how to connect a VNet to a Virtual WAN hub using the portal.
- Previously updated : 07/29/2021 Last updated : 05/13/2022
-# Connect a virtual network to a Virtual WAN hub
+# Connect a virtual network to a Virtual WAN hub - portal
-This article helps you connect your virtual network to your virtual hub. Repeat these steps for each VNet that you want to connect.
+This article helps you connect your virtual network to your virtual hub using the Azure portal. You can also use [PowerShell](howto-connect-vnet-hub-powershell.md) to complete this task. Repeat these steps for each VNet that you want to connect.
> [!NOTE]
-> 1. A virtual network can only be connected to one virtual hub at a time.
-> 2. In order to connect it to a virtual hub, the remote virtual network must not have any gateway.
+>
+> * A virtual network can only be connected to one virtual hub at a time.
+> * In order to connect it to a virtual hub, the remote virtual network can't have a gateway.
## Add a connection