Updates from: 05/26/2023 01:11:06
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md
If needed, you can [create the required network security group and rules using A
For Outbound connectivity, you can either keep **AllowVnetOutbound** and **AllowInternetOutBound** or restrict Outbound traffic by using ServiceTags listed in the following table. The ServiceTag for AzureUpdateDelivery must be added via [PowerShell](powershell-create-instance.md).
-Filtered Outbound traffic is not supported on Classic deployments.
- | Outbound port number | Protocol | Source | Destination | Action | Required | Purpose |
- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|
Filtered Outbound traffic is not supported on Classic deployments.
* Used to perform management tasks using PowerShell remoting in your managed domain.
* Without access to this port, your managed domain can't be updated, configured, backed up, or monitored.
-* For managed domains that use a Resource Manager-based virtual network, you can restrict inbound access to this port to the *AzureActiveDirectoryDomainServices* service tag.
- * For legacy managed domains using a Classic-based virtual network, you can restrict inbound access to this port to the following source IP addresses: *52.180.183.8*, *23.101.0.70*, *52.225.184.198*, *52.179.126.223*, *13.74.249.156*, *52.187.117.83*, *52.161.13.95*, *104.40.156.18*, and *104.40.87.209*.
-
- > [!NOTE]
- > In 2017, Azure AD Domain Services became available to host in an Azure Resource Manager network. Since then, we have been able to build a more secure service using the Azure Resource Manager's modern capabilities. Because Azure Resource Manager deployments fully replace classic deployments, Azure AD DS classic virtual network deployments will be retired on March 1, 2023.
- >
- > For more information, see the [official deprecation notice](https://azure.microsoft.com/updates/we-are-retiring-azure-ad-domain-services-classic-vnet-support-on-march-1-2023/)
+* You can restrict inbound access to this port to the *AzureActiveDirectoryDomainServices* service tag.
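For reference, a minimal Azure CLI sketch of such an inbound rule follows; the resource group name, NSG name, and priority are placeholder assumptions, not values from this article:

```bash
# Allow inbound PowerShell remoting (TCP 5986) to the managed domain, restricted
# to the AzureActiveDirectoryDomainServices service tag.
# Resource group, NSG name, and priority are assumed placeholders.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name aadds-nsg \
  --name AllowPSRemoting \
  --priority 301 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureActiveDirectoryDomainServices \
  --source-port-ranges '*' \
  --destination-address-prefixes '*' \
  --destination-port-ranges 5986
```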
### Port 3389 - management using remote desktop
active-directory-domain-services Tutorial Create Replica Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-replica-set.md
Previously updated : 06/16/2022 Last updated : 05/25/2023 #Customer intent: As an identity administrator, I want to create and use replica sets in Azure Active Directory Domain Services to provide resiliency or geographically distributed managed domain data.
To complete this tutorial, you need the following resources and privileges:
* If needed, [create and configure an Azure Active Directory Domain Services managed domain][tutorial-create-instance].

> [!IMPORTANT]
- > Managed domains created using the Classic deployment model can't use replica sets. You also need to use a minimum of *Enterprise* SKU for your managed domain. If needed, [change the SKU for a managed domain][howto-change-sku].
+ > You need to use a minimum of *Enterprise* SKU for your managed domain to support replica sets. If needed, [change the SKU for a managed domain][howto-change-sku].
## Sign in to the Azure portal
active-directory-domain-services Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/use-azure-monitor-workbooks.md
To access the workbook template for the security overview report, complete the f
1. Select your managed domain, such as *aaddscontoso.com*
1. From the menu on the left-hand side, choose **Monitoring > Workbooks**
- ![Screenshot that hightlights where to select the Security Overview Report and the Account Activity Report.](./media/use-azure-monitor-workbooks/select-workbooks-in-azure-portal.png)
+ ![Screenshot that highlights where to select the Security Overview Report and the Account Activity Report.](./media/use-azure-monitor-workbooks/select-workbooks-in-azure-portal.png)
1. Choose the **Security Overview Report**.
1. From the drop-down menus at the top of the workbook, select your Azure subscription and then an Azure Monitor workspace.
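If you want to inspect the data behind the workbook directly, the report is built on the Log Analytics workspace you select; a hedged Azure CLI sketch, assuming Azure AD DS diagnostic logs are routed to that workspace (the workspace GUID and table choice are assumptions):

```bash
# Count the past day's account logon events in the workspace backing the workbook.
# The workspace GUID is a placeholder; AADDomainServicesAccountLogon assumes the
# AccountLogon diagnostic category is enabled for the managed domain.
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "AADDomainServicesAccountLogon | where TimeGenerated > ago(1d) | summarize Count = count()"
```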
active-directory Howto Sspr Authenticationdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md
The following considerations apply for this authentication contact info:
## Security questions and answers
-The security questions and answers are stored securely in your Azure AD tenant and are only accessible to users via the [SSPR registration portal](https://aka.ms/ssprsetup). Administrators can't see, set, or modify the contents of another users' questions and answers.
+The security questions and answers are stored securely in your Azure AD tenant and are only accessible to users via the [combined registration experience](https://aka.ms/mfasetup) in My Security Info. Administrators can't see, set, or modify the contents of another user's questions and answers.
## What happens when a user registers
active-directory Howto Call A Web Api With Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-call-a-web-api-with-postman.md
Previously updated : 03/14/2023 Last updated : 05/25/2023 zone_pivot_groups: web-api-howto-prereq- #Customer intent: As a software developer, I want to call a protected ASP.NET Core Web API using the Microsoft identity platform with Postman
-# Call an ASP.NET Core web API with Postman
+# Call an ASP.NET Core web API with Postman
::: zone pivot="no-api"
-This article shows you how to call a protected ASP.NET Core web API using [Postman](https://www.postman.com/). Postman is an application that lets you send HTTP requests to a web API to test its authorization and access control (authentication) policies. In this article, you'll register a web app and a web API in a tenant on the Azure portal. The web app is used to get an access token generated by the Microsoft identity platform. Next, you'll use the token to make an authorized call to the web API using Postman.
+This article shows you how to call a protected ASP.NET Core web API using [Postman](https://www.postman.com/). Postman is an application that lets you send HTTP requests to a web API to test its authorization and access control (authentication) policies. In this article, you'll register a web app and a web API in a tenant on the Azure portal. The web app is used to get an access token generated by the Microsoft identity platform. Next, you'll use the token to make an authorized call to the web API using Postman.
::: zone-end

::: zone pivot="api"
-This article shows you how to call a protected ASP.NET Core web API using [Postman](https://www.postman.com/). Postman is an application that lets you send HTTP requests to a web API to test its authorization and access control (authentication) policies. Following on from the [Tutorial: Implement a protected endpoint to your API](web-api-tutorial-03-protect-endpoint.md), where you created a protected API, you'll need to register a web application with the Microsoft identity platform to generate an access token. Next, you'll use the token to make an authorized call to the API using Postman.
+This article shows you how to call a protected ASP.NET Core web API using [Postman](https://www.postman.com/). Postman is an application that lets you send HTTP requests to a web API to test its authorization and access control (authentication) policies. Following on from the [Tutorial: Implement a protected endpoint to your API](web-api-tutorial-03-protect-endpoint.md), where you created a protected API, you'll need to register a web application with the Microsoft identity platform to generate an access token. Next, you'll use the token to make an authorized call to the API using Postman.
::: zone-end
-## Prerequisites
+## Prerequisites
::: zone pivot="no-api" -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). -- This Azure account must have permissions to manage applications. Any of the following Azure Active Directory (Azure AD) roles include the required permissions:
- - Application administrator
- - Application developer
- - Cloud application administrator
-- [Download and install Postman](https://www.postman.com/downloads/).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+- This Azure account must have permissions to manage applications. Any of the following Azure Active Directory (Azure AD) roles include the required permissions:
+ - Application administrator
+ - Application developer
+ - Cloud application administrator
+- [Download and install Postman](https://www.postman.com/downloads/).
- A minimum requirement of [.NET Core 6.0 SDK](https://dotnet.microsoft.com/download/dotnet).
::: zone-end

::: zone pivot="api"
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-- This Azure account must have permissions to manage applications. Any of the following Azure Active Directory (Azure AD) roles include the required permissions:
- - Application administrator
- - Application developer
- - Cloud application administrator
-- Completion of the tutorial series:
- - [Tutorial: Register web API with the Microsoft identity platform](web-api-tutorial-01-register-app.md).
- - [Tutorial: Create and configure an ASP.NET Core project for authentication](web-api-tutorial-02-prepare-api.md).
- - [Tutorial: Implement a protected endpoint to your API](web-api-tutorial-03-protect-endpoint.md).
-- [Download and install Postman](https://www.postman.com/downloads/).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+- This Azure account must have permissions to manage applications. Any of the following Azure Active Directory (Azure AD) roles include the required permissions:
+ - Application administrator
+ - Application developer
+ - Cloud application administrator
+- Completion of the tutorial series:
+ - [Tutorial: Register web API with the Microsoft identity platform](web-api-tutorial-01-register-app.md).
+ - [Tutorial: Create and configure an ASP.NET Core project for authentication](web-api-tutorial-02-prepare-api.md).
+ - [Tutorial: Implement a protected endpoint to your API](web-api-tutorial-03-protect-endpoint.md).
+- [Download and install Postman](https://www.postman.com/downloads/).
::: zone-end
-## Register an application with the Microsoft identity platform
+## Register an application with the Microsoft identity platform
-The Microsoft identity platform requires your application to be registered before providing identity and access management services. The application registration allows you to specify the name and type of the application and the sign-in audience. The sign-in audience specifies what types of user accounts are allowed to sign-in to a given application.
+The Microsoft identity platform requires your application to be registered before providing identity and access management services. The application registration allows you to specify the name and type of the application and the sign-in audience. The sign-in audience specifies what types of user accounts are allowed to sign in to a given application.
::: zone pivot="no-api"
-### Register the web API
+### Register the web API
-Follow these steps to create the web API registration:
+Follow these steps to create the web API registration:
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations > New registration**.
-1. Enter a **Name** for the application, such as *NewWebAPI1*.
-1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select **Help me choose** option.
-1. Select **Register**.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations > New registration**.
+1. Enter a **Name** for the application, such as _NewWebAPI1_.
+1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option.
+1. Select **Register**.
- :::image type="content" source="./media/web-api-tutorial-01-register-app/register-application.png" alt-text="Screenshot that shows how to enter a name and select the account type.":::
+ :::image type="content" source="./media/web-api-tutorial-01-register-app/register-application.png" alt-text="Screenshot that shows how to enter a name and select the account type.":::
-1. The application's **Overview** pane is displayed when registration is complete. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in your application source code.
+1. The application's **Overview** pane is displayed when registration is complete. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in your application source code.
- :::image type="content" source="./media/web-api-tutorial-01-register-app/record-identifiers.png" alt-text="Screenshot that shows the identifier values on the overview page.":::
+ :::image type="content" source="./media/web-api-tutorial-01-register-app/record-identifiers.png" alt-text="Screenshot that shows the identifier values on the overview page.":::
> [!NOTE]
-> The **Supported account types** can be changed by referring to [Modify the accounts supported by an application](howto-modify-supported-accounts.md).
+> The **Supported account types** can be changed by referring to [Modify the accounts supported by an application](howto-modify-supported-accounts.md).
-#### Expose the API
+#### Expose the API
-Once the API is registered, you can configure its permission by defining the scopes that the API exposes to client applications. Client applications request permission to perform operations by passing an access token along with its requests to the protected web API. The web API then performs the requested operation only if the access token it receives is valid.
+Once the API is registered, you can configure its permissions by defining the scopes that the API exposes to client applications. Client applications request permission to perform operations by passing an access token along with their requests to the protected web API. The web API then performs the requested operation only if the access token it receives is valid.
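In practice that exchange looks like the following sketch: the client attaches the token as a bearer header and the API validates it before responding (the port and token are placeholders, and the endpoint path is the weather-forecast sample used later in this article):

```bash
# Call the protected endpoint with an access token attached as a bearer header.
# -k skips dev-certificate validation; {port} and {access_token} are placeholders.
curl -k -i "https://localhost:{port}/weatherforecast" \
  -H "Authorization: Bearer {access_token}"
```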
-1. Under **Manage**, select **Expose an API > Add a scope**. Accept the proposed **Application ID URI** `(api://{clientId})` by selecting **Save and continue**. The `{clientId}` is the value recorded from the **Overview** page. Then enter the following information:
- 1. For **Scope name**, enter `Forecast.Read`.
- 1. For **Who can consent**, ensure that the **Admins and users** option is selected.
- 1. In the **Admin consent display name** box, enter `Read forecast data`.
- 1. In the **Admin consent description** box, enter `Allows the application to read weather forecast data`.
- 1. In the **User consent display name** box, enter `Read forecast data`.
- 1. In the **User consent description** box, enter `Allows the application to read weather forecast data`.
- 1. Ensure that the **State** is set to **Enabled**.
-1. Select **Add scope**. If the scope has been entered correctly, it's listed in the **Expose an API** pane.
+1. Under **Manage**, select **Expose an API > Add a scope**. Accept the proposed **Application ID URI** `(api://{clientId})` by selecting **Save and continue**. The `{clientId}` is the value recorded from the **Overview** page. Then enter the following information:
+ 1. For **Scope name**, enter `Forecast.Read`.
+ 1. For **Who can consent**, ensure that the **Admins and users** option is selected.
+ 1. In the **Admin consent display name** box, enter `Read forecast data`.
+ 1. In the **Admin consent description** box, enter `Allows the application to read weather forecast data`.
+ 1. In the **User consent display name** box, enter `Read forecast data`.
+ 1. In the **User consent description** box, enter `Allows the application to read weather forecast data`.
+ 1. Ensure that the **State** is set to **Enabled**.
+1. Select **Add scope**. If the scope has been entered correctly, it's listed in the **Expose an API** pane.
:::image type="content" source="./media/web-api-tutorial-01-register-app/add-a-scope-inline.png" alt-text="Screenshot that shows the field values when adding the scope to an API." lightbox="./media/web-api-tutorial-01-register-app/add-a-scope-expanded.png":::
-
+
::: zone-end

::: zone pivot="api"
-
::: zone-end
-### Register the web app
+### Register the web app
-Having a web API isn't enough however, as a web app is also needed to obtain an access token to access the web API you've created.
+However, having a web API isn't enough; a web app is also needed to obtain an access token to access the web API you've created.
-Follow these steps to create the web app registration:
+Follow these steps to create the web app registration:
::: zone pivot="no-api"
-1. Select **Home** to return to the home page. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. Enter a **Name** for the application, such as `web-app-calls-web-api`.
-1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option.
-1. Under **Redirect URI (optional)**, select **Web**, and then enter `http://localhost` in the URL text box.
-1. Select **Register**.
+1. Select **Home** to return to the home page. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for the application, such as `web-app-calls-web-api`.
+1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option.
+1. Under **Redirect URI (optional)**, select **Web**, and then enter `http://localhost` in the URL text box.
+1. Select **Register**.
::: zone-end

::: zone pivot="api"
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. If access to multiple tenants is available, use the Directories + subscriptions filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. Enter a Name for the application, such as `web-app-calls-web-api`.
-1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option.
-1. Under **Redirect URI (optional)**, select **Web**, and then enter `http://localhost` in the URL text box.
-1. Select **Register**.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If access to multiple tenants is available, use the Directories + subscriptions filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for the application, such as `web-app-calls-web-api`.
+1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option.
+1. Under **Redirect URI (optional)**, select **Web**, and then enter `http://localhost` in the URL text box.
+1. Select **Register**.
::: zone-end
-When registration is complete, the Azure portal displays the app registration's **Overview** pane. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in later steps.
+When registration is complete, the Azure portal displays the app registration's **Overview** pane. Record the **Directory (tenant) ID** and the **Application (client) ID** to be used in later steps.
-#### Add a client secret
+#### Add a client secret
-A client secret is a string value your app can use to identity itself, and is sometimes referred to as an *application password*. The web app uses the client secret to prove its identity when it requests tokens.
+A client secret is a string value your app can use to identify itself, and is sometimes referred to as an _application password_. The web app uses the client secret to prove its identity when it requests tokens.
-Follow these steps to configure a client secret:
+Follow these steps to configure a client secret:
-1. From the **Overview** pane in the Azure portal, under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
-1. Add a description for your client secret, for example *My client secret*.
-1. Select an expiration for the secret or specify a custom lifetime.
+1. From the **Overview** pane in the Azure portal, under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
+1. Add a description for your client secret, for example _My client secret_.
+1. Select an expiration for the secret or specify a custom lifetime.
- - A client secret's lifetime is limited to two years (24 months) or less. You can't specify a custom lifetime longer than 24 months.
- - Microsoft recommends that you set an expiration value of less than 12 months.
+ - A client secret's lifetime is limited to two years (24 months) or less. You can't specify a custom lifetime longer than 24 months.
+ - Microsoft recommends that you set an expiration value of less than 12 months.
-1. Select **Add**.
-1. Be sure to record the **Value** of the client secret. This secret value is **never displayed again** after you leave this page.
+1. Select **Add**.
+1. Be sure to record the **Value** of the client secret. This secret value is **never displayed again** after you leave this page.
-#### Add permissions to access your web API
+#### Add permissions to access your web API
-By specifying a web API's scopes, the web app can obtain an access token containing the scopes provided by the Microsoft identity platform. Within the code, the web API can then provide permission-based access to its resources based on the scopes found in the access token.
+By specifying a web API's scopes, the web app can obtain an access token containing the scopes provided by the Microsoft identity platform. Within the code, the web API can then provide permission-based access to its resources based on the scopes found in the access token.
-Follow these steps to configure client's permissions to the web API:
+Follow these steps to configure the client's permissions to the web API:
-1. From the **Overview** pane of your application in the Azure portal, under **Manage**, select **API permissions** > **Add a permission** > **My APIs**.
-1. Select **NewWebAPI1** or the API that you wish to add permissions to.
-1. Under **Select permissions**, check the box next to **Forecast.Read**. You may need to expand the **Permission** list. This selects the permissions the client app should have on behalf of the signed-in user.
-1. Select **Add permissions** to complete the process.
+1. From the **Overview** pane of your application in the Azure portal, under **Manage**, select **API permissions** > **Add a permission** > **My APIs**.
+1. Select **NewWebAPI1** or the API that you wish to add permissions to.
+1. Under **Select permissions**, check the box next to **Forecast.Read**. You may need to expand the **Permission** list. This selects the permissions the client app should have on behalf of the signed-in user.
+1. Select **Add permissions** to complete the process.
-After adding these permissions to your API, you should see the selected permissions under **Configured permissions**.
+After adding these permissions to your API, you should see the selected permissions under **Configured permissions**.
-You may also notice the **User.Read** permission for the Microsoft Graph API. This permission is added automatically when you register an app in the Azure portal.
+You may also notice the **User.Read** permission for the Microsoft Graph API. This permission is added automatically when you register an app in the Azure portal.
::: zone pivot="no-api"
-## Test the web API
+## Test the web API
+
+1. Clone the [ms-identity-docs-code-dotnet](https://github.com/Azure-Samples/ms-identity-docs-code-dotnet) repository.
-1. Clone the [ms-identity-docs-code-dotnet](https://github.com/Azure-Samples/ms-identity-docs-code-dotnet) repository.
-
   ```bash
   git clone https://github.com/Azure-Samples/ms-identity-docs-code-dotnet.git
- ```
+ ```
-1. Navigate to `ms-identity-docs-code-dotnet/web-api` folder and open `appsettings.json`, replace the `{APPLICATION_CLIENT_ID}` and `{DIRECTORY_TENANT_ID}` with:
+1. Navigate to the `ms-identity-docs-code-dotnet/web-api` folder, open `appsettings.json`, and replace `{APPLICATION_CLIENT_ID}` and `{DIRECTORY_TENANT_ID}` with:
- - `{APPLICATION_CLIENT_ID}` is the web API **Application (client) ID** on the app's **Overview** pane **App registrations** in the Azure portal.
- - `{DIRECTORY_TENANT_ID}` is the web API **Directory (tenant) ID** on the app's **Overview** pane **App registrations** in the Azure portal.
+ - `{APPLICATION_CLIENT_ID}` is the web API **Application (client) ID** on the app's **Overview** pane under **App registrations** in the Azure portal.
+ - `{DIRECTORY_TENANT_ID}` is the web API **Directory (tenant) ID** on the app's **Overview** pane under **App registrations** in the Azure portal.
-1. Execute the following command to start the app:
+1. Execute the following command to start the app:
```bash
- dotnet run
- ```
+ dotnet run
+ ```
-1. An output similar to the following will appear. Record the port number in the `https://localhost:{port}` URL.
+1. An output similar to the following will appear. Record the port number in the `https://localhost:{port}` URL.
```bash
- ...
+ ...
info: Microsoft.Hosting.Lifetime[14] Now listening on: https://localhost:{port} ...
- ```
+ ```
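Before moving on to Postman, you can sanity-check that the API rejects anonymous calls; a quick sketch, where `{port}` is the port you just recorded:

```bash
# Call the protected endpoint without a bearer token; expect HTTP 401 Unauthorized.
# -k skips validation of the ASP.NET Core development certificate.
curl -k -i "https://localhost:{port}/weatherforecast"
```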
::: zone-end
You may also notice the **User.Read** permission for the Microsoft Graph API. Th
## Test the web API
-1. Navigate to the web API that was created in [Tutorial: Create an ASP.NET Core project and configure the API](web-api-tutorial-02-prepare-api.md), for example *NewWebAPILocal*, and open the folder.
+1. Navigate to the web API that was created in [Tutorial: Create an ASP.NET Core project and configure the API](web-api-tutorial-02-prepare-api.md), for example _NewWebAPILocal_, and open the folder.
1. Open a new terminal window and navigate to the folder where the web API project is located.
- ### [.NET 6.0](#tab/dotnet6)
-
- 1. Execute the following command to start the app:
-
- ```bash
- dotnet run
- ```
-
- ### [.NET 7.0](#tab/dotnet7)
-
- 1. Open a new terminal and execute the following command to start the app on the `https` profile:
-
- ```bash
- dotnet run -launch-profile https`
- ```
-
-
-1. An output similar to the following will appear. Record the port number in the `https://localhost:{port}` URL.
+ ### [.NET 6.0](#tab/dotnet6)
+
+ 1. Execute the following command to start the app:
+
+ ```bash
+ dotnet run
+ ```
+
+ ### [.NET 7.0](#tab/dotnet7)
+
+ 1. Open a new terminal and execute the following command to start the app on the `https` profile:
+
+ ```bash
   dotnet run --launch-profile https
+ ```
+
+
+
+1. An output similar to the following will appear. Record the port number in the `https://localhost:{port}` URL.
```bash
- ...
+ ...
info: Microsoft.Hosting.Lifetime[14] Now listening on: https://localhost:{port} ...
- ```
+ ```
::: zone-end
-### Configure an authorized request to the web API in Postman
+### Configure an authorized request to the web API in Postman
+
+1. Launch the **Postman** application.
+1. In the main Postman window, find **Create a new request** and select **HTTP Request**.
+1. In the top bar, select **GET** from the dropdown menu.
+1. For the request URL, enter the URL of the endpoint exposed by the web API, `https://localhost:{port}/weatherforecast`.
+1. Select the **Authorization** tab to configure Postman to obtain a token from the Microsoft identity platform that will grant access to the web API.
+1. From the **Type** dropdown, select **OAuth 2.0**. This displays the **Configure New Token** form.
+1. Enter the following values in the **Configure New Token** form:
-1. Launch the **Postman** application.
-1. In the main Postman window, find **Create a new request** and select **HTTP Request**.
-1. In the top bar, ensure that **GET** is selected from the dropdown menu.
-1. For the request URL, enter the URL of the endpoint exposed by the web API, `https://localhost:{port}/weatherforecast`.
-1. Select the **Authorization** tab to configure Postman to obtain a token from the Microsoft Identity platform that will grant access to the web API.
-1. Enter the following values in the **Authorization** tab:
+ | Setting | Value |
+ | - | - |
+ | Token Name | Provide any name for the token. For example, enter `Bearer` |
+ | Grant Type | Select **Authorization Code** |
+ | Callback URL | Enter `http://localhost`, which sets the Callback URL to the Redirect URI registered with Azure AD. DO NOT check the **Authorize using browser** checkbox. |
+ | Auth URL | `https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/authorize` <br/> Replace `{tenantId}` with the **Directory (tenant) ID** |
+ | Access Token URL | `https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token` <br/> Replace `{tenantId}` with the **Directory (tenant) ID** |
+ | Client ID | The **Application (client) ID** value of your web app registration |
+ | Client Secret | The client secret **Value** of your web app registration |
+ | Scope | `api://{application_client_id}/Forecast.Read` <br/> Navigate to your web app registration, under **Manage**, select **API permissions**, then select **Forecast.Read** <br/> Copy the value in the textbox, which contains the **Scope** value |
- | Setting | Value |
- |--|-|
- | Auth URL | `https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/authorize` <br/> Replace `{tenantId}` with the **Directory (tenant) ID** |
- | Access Token URL | `https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token` <br/> Replace `{tenantId}` with the **Directory (tenant) ID** |
- | Client ID | The **Application (client) ID** value of your web app registration |
- | Client Secret | The client secret **Value** of your web app registration |
- | Scope | `api://{application_client_id}/Forecast.Read` <br/> Navigate to your web app registration, under **Manage**, select **API permissions**, then select **Forecast.Read** <br/> Copy the value in the textbox, which contains the **Scope** value |
-
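For reference, **Get New Access Token** assembles a browser-based authorization request from the values above; a hedged sketch of roughly what that URL looks like (a real request also carries a generated state value):

```bash
# Build the authorization URL Postman opens in the browser.
# Braced values are placeholders for your registration's values.
TENANT_ID="{tenantId}"
CLIENT_ID="{application_client_id}"
echo "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/authorize?client_id=${CLIENT_ID}&response_type=code&redirect_uri=http%3A%2F%2Flocalhost&scope=api%3A%2F%2F${CLIENT_ID}%2FForecast.Read"
```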
-#### Get an access token and send a request to the web API
+#### Get an access token and send a request to the web API
-1. Once these values are entered select the **Get New Access Token** button. This launches a Postman browser window where you authenticate with your user credentials. Be sure to allow pop ups from the Postman application in the browser.
-1. After authenticating, a new Postman generated pop-up window will appear. Select the **Use Token** button in Postman to provide the access token in the request.
-1. Select **Send** to send the request to the protected web API endpoint.
+1. Once these values are entered, select the **Get New Access Token** button. This launches a Postman browser window where you authenticate with your user credentials. Be sure to allow pop-ups from the Postman application in the browser.
+1. After authenticating, a new Postman-generated pop-up window appears. Select the **Use Token** button in Postman to provide the access token in the request.
+1. Select **Send** to send the request to the protected web API endpoint.
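Under the hood, Postman then exchanges the returned authorization code at the token endpoint; roughly equivalent to the following sketch, where all braced values are placeholders:

```bash
# Exchange the authorization code for an access token (what Postman does for you).
curl -s -X POST "https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token" \
  -d "client_id={application_client_id}" \
  -d "client_secret={client_secret}" \
  -d "grant_type=authorization_code" \
  -d "code={authorization_code}" \
  -d "redirect_uri=http://localhost" \
  -d "scope=api://{application_client_id}/Forecast.Read"
```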
-With a valid access token included in the request, the expected response is 200 OK with output similar to:
+With a valid access token included in the request, the expected response is 200 OK with output similar to:
```json
[
With a valid access token included in the request, the expected response is 200
"temperatureF": 51 } ]
-```
+```
## Next steps
-For more information about OAuth 2.0 authorization code flow and application types, see:
+For more information about OAuth 2.0 authorization code flow and application types, see:
-- [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md)
-- [Application types for the Microsoft identity platform](v2-app-types.md#web-apps)
+- [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md)
+- [Application types for the Microsoft identity platform](v2-app-types.md#web-apps)
active-directory Msal Android Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-shared-devices.md
These Microsoft applications support Azure AD's shared device mode:
- [Microsoft Teams](/microsoftteams/platform/)
- [Microsoft Managed Home Screen](/mem/intune/apps/app-configuration-managed-home-screen-app) app for Android Enterprise
-- [Microsoft Edge](/microsoft-edge) (in Public Preview)
-- [Outlook](/mem/intune/apps/app-configuration-policies-outlook) (in Public Preview)
-- [Microsoft Power Apps](/power-apps) (in Public Preview)
-- [Yammer](/yammer) (in Public Preview)
-
-> [!IMPORTANT]
-> Public preview is provided without a service-level agreement and isn't recommended for production workloads. Some features might be unsupported or have constrained capabilities. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+- [Microsoft Edge](/microsoft-edge)
+- [Outlook](/mem/intune/apps/app-configuration-policies-outlook)
+- [Microsoft Power Apps](/power-apps)
+- [Microsoft Viva Engage](/viva/engage/overview) (previously [Yammer](/yammer))
## Shared device sign-out and the overall app lifecycle
active-directory How To Create Customer Tenant Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-create-customer-tenant-portal.md
Previously updated : 05/09/2023 Last updated : 05/23/2023
In this article, you learn how to:
:::image type="content" source="media/how-to-create-customer-tenant-portal/add-subscription.png" alt-text="Screenshot that shows the subscription settings."::: 1. Select **Next: Review + Create**. If the information that you entered is correct, select **Create**. The tenant creation process can take up to 30 minutes. You can monitor the progress of the tenant creation process in the **Notifications** pane. Once the customer tenant is created, you can access it in both the Microsoft Entra admin center and the Azure portal.
-1. The tenant creation may take a few minutes to complete. You can monitor the progress by checking the notification bell at the top right corner of the screen. Once the tenant is successfully created, you can navigate to it by selecting the link provided below.
:::image type="content" source="media/how-to-create-customer-tenant-portal/tenant-successfully-created.png" alt-text="Screenshot that shows the link to the new customer tenant.":::
If you're not sure which directory contains your customer tenant, you can find t
:::image type="content" source="media/how-to-create-customer-tenant-portal/directories-subscription.png" alt-text="Screenshot of the Directories + subscriptions icon.":::
-1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**.
-1. On the tenant's home page, select the **Overview** tab. You can find the tenant **Name**, **Tenant ID** and **Primary domain** under **Basic information**.
+1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. This step will bring you to the tenant's home page.
+1. Select the **Overview** tab at the top of the page. You can find the tenant **Name**, **Tenant ID** and **Primary domain** under **Basic information**.
:::image type="content" source="media/how-to-create-customer-tenant-portal/tenant-overview.png" alt-text="Screenshot of the tenant details.":::
active-directory How To Facebook Federation Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-facebook-federation-customers.md
Previously updated : 04/28/2023 Last updated : 05/24/2023
If you don't already have a Facebook account, sign up at [https://www.facebook.c
1. Enter a URL for the **Terms of Service URL**, for example `https://www.contoso.com/tos`. The Terms of Service URL is a page you maintain to provide terms and conditions for your application.
1. Enter a URL for the **User Data Deletion**, for example `https://www.contoso.com/delete_my_data`. The User Data Deletion URL is a page you maintain to provide a way for users to request that their data be deleted.
1. Choose a **Category**, for example `Business and Pages`. Facebook requires this value, but it's not used for Azure AD.
-2. At the bottom of the page, select **Add Platform**, and then select **Website**.
-3. In **Site URL**, enter the address of your website, for example `https://contoso.com`.
-4. Select **Save Changes**.
-5. From the menu, select the **plus** sign or **Add Product** link next to **PRODUCTS**. Under the **Add Products to Your App**, select **Set up** under **Facebook Login**.
-6. From the menu, select **Facebook Login**, select **Settings**.
-7. In **Valid OAuth redirect URIs**, enter:
- - `https://login.microsoftonline.com`
- - `https://login.microsoftonline.com/te/<tenant ID>/oauth2/authresp`. Replace the tenant ID with your Azure AD for customers tenant ID. To find your tenant ID, go to the [Microsoft Entra admin center](https://entra.microsoft.com). Under **Azure Active Directory**, select **Overview**. Then select the **Overview** tab and copy the **Tenant ID**.
- - `https://login.microsoftonline.com/te/<tenant name>.onmicrosoft.com/oauth2/authresp`. Replace the tenant name with your Azure AD for customers tenant name.
-8. Select **Save Changes** at the bottom of the page.
-9. To make your Facebook application available to Azure AD, select the Status selector at the top right of the page and turn it **On** to make the Application public, and then select **Switch Mode**. At this point, the Status should change from **Development** to **Live**. For more information, see [Facebook App Development](https://developers.facebook.com/docs/development/release).
+1. At the bottom of the page, select **Add Platform**, and then select **Website**.
+1. In **Site URL**, enter the address of your website, for example `https://contoso.com`.
+1. Select **Save Changes**.
+1. From the menu, select the **plus** sign or **Add Product** link next to **PRODUCTS**. Under the **Add Products to Your App**, select **Set up** under **Facebook Login**.
+1. From the menu, select **Facebook Login**, then select **Settings**.
+1. In **Valid OAuth redirect URIs**, enter the following URIs, replacing `<tenant-ID>` with your customer tenant ID and `<tenant-name>` with your customer tenant name:
+ - `https://login.microsoftonline.com/te/<tenant-ID>/oauth2/authresp`
+ - `https://<tenant-ID>.ciamlogin.com/<tenant-ID>/federation/oidc/www.facebook.com`
+ - `https://<tenant-ID>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oidc/www.facebook.com`
+ - `https://<tenant-ID>.ciamlogin.com/<tenant-ID>/federation/oauth2`
+ - `https://<tenant-ID>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oauth2`
+ > [!NOTE]
+ > To find your customer tenant ID, go to the [Microsoft Entra admin center](https://entra.microsoft.com). Under **Azure Active Directory**, select **Overview**. Then select the **Overview** tab and copy the **Tenant ID**.
+1. Select **Save Changes** at the bottom of the page.
+1. To make your Facebook application available to Azure AD, select the Status selector at the top right of the page and turn it **On** to make the Application public, and then select **Switch Mode**. At this point, the Status should change from **Development** to **Live**. For more information, see [Facebook App Development](https://developers.facebook.com/docs/development/release).
## Configure Facebook federation in Azure AD for customers
active-directory How To Google Federation Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-google-federation-customers.md
Previously updated : 04/28/2023 Last updated : 05/24/2023
To enable sign-in for customers with a Google account, you need to create an app
1. Under **Quick access**, or in the left menu, select **APIs & services** and then **OAuth consent screen**.
1. For the **User Type**, select **External** and then select **Create**.
1. On the **OAuth consent screen**, under **App information**:
- 1. Enter a **Name** for your application.
- 2. Select a **User support email** address.
-1. Under the **Authorized domains** section, select **Add domain**, and then type *microsoftonline.com*.
+ 1. Enter a **Name** for your application.
+ 1. Select a **User support email** address.
+1. Under the **Authorized domains** section, select **Add domain**, and then add `ciamlogin.com` and `microsoftonline.com`.
1. In the **Developer contact information** section, enter comma-separated emails for Google to notify you about any changes to your project.
1. Select **Save and Continue**.
1. From the left menu, select **Credentials**.
1. Select **Create credentials**, and then **OAuth client ID**.
1. Under **Application type**, select **Web application**.
- 1. Enter a suitable **Name** for your application, such as "Azure AD for customers."
- 1. For the **Authorized redirect URIs**, enter:
- - `https://login.microsoftonline.com`
- - `https://login.microsoftonline.com/te/<tenant ID>/oauth2/authresp`. Replace the tenant ID with your Azure AD for customers tenant ID. To find your tenant ID, go to the [Microsoft Entra admin center](https://entra.microsoft.com). Under **Azure Active Directory**, select **Overview**. Then select the **Overview** tab and copy the **Tenant ID**.
- - `https://login.microsoftonline.com/te/<tenant name>.onmicrosoft.com/oauth2/authresp`. Replace the tenant name with your Azure AD for customers tenant name.
-1. Select **Create**.
-1. Copy the values of **Client ID** and **Client secret**. You need both values to configure Google as an identity provider in your tenant. **Client secret** is an important security credential.
+ 1. Enter a suitable **Name** for your application, such as "Azure AD for customers."
+ 1. For the **Authorized redirect URIs**, enter the following URIs, replacing `<tenant-ID>` with your customer tenant ID and `<tenant-name>` with your customer tenant name:
+ - `https://login.microsoftonline.com`
+ - `https://login.microsoftonline.com/te/<tenant-ID>/oauth2/authresp`
+ - `https://login.microsoftonline.com/te/<tenant-name>.onmicrosoft.com/oauth2/authresp`
+ - `https://<tenant-ID>.ciamlogin.com/<tenant-ID>/federation/oidc/accounts.google.com`
+ - `https://<tenant-ID>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oidc/accounts.google.com`
+ - `https://<tenant-ID>.ciamlogin.com/<tenant-ID>/federation/oauth2`
+ - `https://<tenant-ID>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oauth2`
+ > [!NOTE]
+ > To find your customer tenant ID, go to the [Microsoft Entra admin center](https://entra.microsoft.com). Under **Azure Active Directory**, select **Overview**. Then select the **Overview** tab and copy the **Tenant ID**.
+1. Select **Create**.
+1. Copy the values of **Client ID** and **Client secret**. You need both values to configure Google as an identity provider in your tenant. **Client secret** is an important security credential.
> [!NOTE]
> In some cases, your app might require verification by Google (for example, if you update the application logo). For more information, check out [Google's verification status guide](https://support.google.com/cloud/answer/10311615#verification-status).
active-directory How To Single Page Application React Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-prepare-app.md
Title: Prepare a React Single Page App (SPA) for authentication
-description: Learn how to prepare a React single-page app (SPA) for authentication and authorization with your Azure Active Directory (AD) for customers tenant.
+ Title: Prepare a React single-page app (SPA) for authentication
+description: Learn how to prepare a React single-page app (SPA) for authentication with your Azure Active Directory (AD) for customers tenant.
-+ - Last updated 05/23/2023 -
-#Customer intent: As a dev, devops, or IT admin, enable authentication in my own React
+#Customer intent: As a dev, devops, or IT admin, I want to learn how to enable authentication in my own React single-page app
-# Prepare a React Single-page application for authentication
-
-After registration is complete, a React project can be created using an integrated development environment (IDE). This tutorial demonstrates how to create a React Single-page application using npm and create files needed for authentication and authorization.
-
-In this article, you learn how to:
-
-> [!div class="checklist"]
-> * Create a new React project
-> * Configure the settings for the application
-> * Install identity and bootstrap packages
-> * Add authentication code to the application
+# Prepare a React single-page app (SPA) for authentication
+After registration is complete, you can create a React project using an integrated development environment (IDE). This guide demonstrates how to create a React single-page app using npm and how to create the files needed for authentication and authorization.
## Prerequisites
-* Completion of the prerequisites and steps in [Prepare your customer tenant for building a React Single Page App (SPA)](./how-to-single-page-application-react-prepare-tenant.md))
+* Completion of the prerequisites and steps in [Prepare your customer tenant for building a React single-page app (SPA)](./how-to-single-page-application-react-prepare-tenant.md)
* Although any IDE that supports React applications can be used, Visual Studio Code is used for this guide. This can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads/) page.
* [Node.js](https://nodejs.org/en/download/)

## Create a new React project
-
-Use the following tabs to create a React project within the IDE.
+Use the following tabs to create a React project within Visual Studio Code.
1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project.
1. Open a new terminal by selecting **Terminal** > **New Terminal**.
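The scaffolding commands themselves are elided in this digest; a minimal sketch, assuming the `reactspalocal` project name that appears in the folder structure later in this article:

```bash
# Scaffold and enter a new React project (project name is an assumption).
npx create-react-app reactspalocal
cd reactspalocal
```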
Use the following tabs to create a React project within the IDE.
   ```

## Install identity and bootstrap packages
-
Identity-related **npm** packages must be installed in the project to enable user authentication. For project styling, **Bootstrap** will be used.

1. In the **Terminal** bar, select the **+** icon to create a new terminal. A separate terminal window will open with the previous node terminal continuing to run in the background.
Identity related **npm** packages must be installed in the project to enable use
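The install commands are likewise elided here; a sketch under the assumption that this guide uses the standard MSAL React and Bootstrap packages:

```bash
# Install the MSAL packages for browser-based authentication and the React
# wrapper, plus Bootstrap for styling (package choice is an assumption).
npm install @azure/msal-browser @azure/msal-react
npm install react-bootstrap bootstrap
```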
   ```

## Creating the authentication configuration file
-
1. In the *src* folder, create a new file called *authConfig.js*.
1. Open *authConfig.js* and add the following code snippet:
Identity related **npm** packages must be installed in the project to enable use
## Modify index.js to include the authentication provider
-
All parts of the app that require authentication must be wrapped in the [`MsalProvider`](/javascript/api/@azure/msal-react/#@azure-msal-react-msalprovider) component. You instantiate a [PublicClientApplication](/javascript/api/@azure/msal-browser/publicclientapplication) then pass it to `MsalProvider`.

1. In the *src* folder, open *index.js* and replace the contents of the file with the following code snippet to use the `msal` packages and bootstrap styling:
All parts of the app that require authentication must be wrapped in the [`MsalPr
## Next steps > [!div class="nextstepaction"]
-> [Add Sign-in and Sign-out functionality to your app.](./how-to-single-page-application-react-sign-in-out.md)
+> [Add sign-in and sign-out functionality to your app.](./how-to-single-page-application-react-sign-in-out.md)
active-directory How To Single Page Application React Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-prepare-tenant.md
Title: Prepare your tenant to use a React single-page app for authentication.
+ Title: Prepare your customer tenant to authenticate users in a React single-page app (SPA)
description: Learn how to configure your Azure Active Directory (AD) for customers tenant for authentication with a React single-page app (SPA). -+
Last updated 05/23/2023 -
-#Customer intent: As a dev I want to prepare my customer tenant for building a Single Page App with React
+#Customer intent: As a dev I want to prepare my customer tenant for building a single-page app (SPA) with React
-# Prepare your customer tenant for building a Single Page App (SPA)
+# Prepare your customer tenant to authenticate users in a React single-page app (SPA)
-Before your applications can interact with Microsoft Identity Platform they must be registered in a customer tenant that you manage and must be associated with a user flow.
+Before your applications can interact with the Microsoft identity platform, they must be registered in a customer tenant that you manage and must be associated with a user flow.
-In this article, you learn how to:
-
-> [!div class="checklist"]
-> * Register your application and record identifiers.
-> * Create a user flow to allow sign-up and sign-in.
-> * Associate the user flow with your application.
+In this article, you learn how to register your application and record identifiers, create a user flow, and associate that user flow with your application.
## Prerequisites
If you haven't already created your own customer tenant, [create one now](https:
## Next steps > [!div class="nextstepaction"]
-> [Start building your React Single Page Application](./how-to-single-page-application-react-prepare-app.md)
+> [Start building your React single-page app](./how-to-single-page-application-react-prepare-app.md)
active-directory How To Single Page Application React Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-sample.md
Title: Sign in users in a sample React single-page application
-description: Learn how to configure a sample React SPA to sign in and sign out users.
+description: Learn how to configure a sample React single-page app (SPA) to sign in and sign out users.
- Last updated 05/23/2023-
-#Customer intent: As a dev, devops, I want to learn about how to configure a sample React Single Page Application to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
+#Customer intent: As a dev, devops, I want to learn about how to configure a sample React single-page app to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
-# Sign in users in a sample React single-page application
+# Sign in users in a sample React single-page app (SPA)
-This how-to guide uses a sample React single-page application (SPA) to demonstrate how to add authentication to a SPA. This SPA enables users to sign in and sign out by using you Azure Active Directory (Azure AD) for customers tenant. The sample uses the [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) to handle authentication.
+This guide uses a sample React single-page application (SPA) to demonstrate how to add authentication to a SPA. This SPA enables users to sign in and sign out by using your Azure Active Directory (Azure AD) for customers tenant. The sample uses the [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) to handle authentication.
## Prerequisites-
-* Although any IDE that supports vanilla JS applications can be used, **Visual Studio Code** is used for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
+* Although any IDE that supports React applications can be used, **Visual Studio Code** is used for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
* [Node.js](https://nodejs.org/en/download/). * Azure AD for customers tenant. If you don't already have one, [sign up for a free trial](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl).
If you choose to download the `.zip` file, extract the sample app file to a fold
1. Save the file.

## Run your project and sign in
-
All the required code snippets have been added, so the application can now be called and tested in a web browser.

1. Open a new terminal by selecting **Terminal** > **New Terminal**.
All the required code snippets have been added, so the application can now be ca
1. Once signed in, the display name is shown next to the **Sign out** button.

## Next steps
-
-Learn how to use the Microsoft Authentication Library (MSAL) for JavaScript to sign in users and acquire tokens to call Microsoft Graph.
+> [!div class="nextstepaction"]
+> [Enable self-service password reset](./how-to-enable-password-reset-customers.md)
active-directory How To Single Page Application React Sign In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-sign-in-out.md
Title: Sign in users with a React single-page-application
+ Title: Add sign-in to a React single-page app (SPA)
description: Learn how to configure a React single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant.
Last updated 05/23/2023
-#Customer intent: As a developer I want to add sign-in and sign-out functionality to my React Single Page App
+#Customer intent: As a developer I want to add sign-in and sign-out functionality to my React single-page app
-# Create components for sign in and sign out in a React single page app
-
+# Add sign-in to a React single-page app (SPA)
Functional components are the building blocks of React apps. This tutorial demonstrates how functional components can be used to build the sign in and sign out experience in a React single-page app (SPA). The `useMsal` hook is used to retrieve an access token to allow user sign in.
-In this tutorial:
-
-> [!div class="checklist"]
->
-> - Add components to the application
-> - Create a way of displaying the user's profile information
-> - Create a layout that displays the sign in and sign out experience
-> - Add the sign in and sign out experiences
+In this article, you add components to the application, create a layout that displays the sign-in and sign-out experience, and add the sign-in and sign-out functionality.
## Prerequisites
-* Completion of the prerequisites and steps in [Prepare an Single Page Application for authentication](how-to-single-page-application-react-prepare-app.md).
+* Completion of the prerequisites and steps in [Prepare a single-page app for authentication](how-to-single-page-application-react-prepare-app.md).
## Adding components to the application
In this tutorial:
- *SignInButton.jsx*
- *SignOutButton.jsx*

Once complete, you should have the following folder structure.

```txt
reactspalocal/
```

### Adding the page layout

1. Open *PageLayout.jsx* and add the following code to render the page layout. The [useIsAuthenticated](/javascript/api/@azure/msal-react) hook returns whether or not a user is currently signed in.

```javascript
reactspalocal/
1. Save the file.

### Adding the sign in experience

1. Open *SignInButton.jsx* and add the following code, which creates a button that signs in the user using either a pop-up or redirect.

```javascript
reactspalocal/
1. Save the file.

### Adding the sign out experience

1. Open *SignOutButton.jsx* and add the following code, which creates a button that signs out the user using either a pop-up or redirect.

```javascript
reactspalocal/
```

## Change filename and add required imports

By default, the application runs via a JavaScript file called *App.js*. It needs to be renamed to *App.jsx*, which is an extension that allows a developer to write HTML in React.

1. Rename App.js to App.jsx.
By default, the application runs via a JavaScript file called *App.js*. It needs
```

### Replacing the default function to render authenticated information

The following code will render based on whether the user is authenticated or not. Replace the default function `App()` with the following code:

```javascript
export default function App() {
```

## Run your project and sign in

All the required code snippets have been added, so the application can now be called and tested in a web browser.

1. Open a new terminal by selecting **Terminal** > **New Terminal**.
All the required code snippets have been added, so the application can now be ca
npm start ```
-1. Open a web browser and navigate to the port specified in [Prepare a Single-page application for authentication](./how-to-single-page-application-react-prepare-app.md). For example, http://localhost:3000/.
+1. Open a web browser and navigate to the port specified in [Prepare a single-page application for authentication](./how-to-single-page-application-react-prepare-app.md). For example, http://localhost:3000/.
1. For the purposes of this how-to, choose the **Sign in using Popup** option.
1. After the popup window appears with the sign-in options, select the account with which to sign in.
1. A second window may appear indicating that a code will be sent to your email address. If this happens, select **Send code**. Open the email from the sender Microsoft account team, and enter the 7-digit single-use code. Once entered, select **Sign in**.
All the required code snippets have been added, so the application can now be ca
1. The app will now ask for permission to sign in and access data. Select **Accept** to continue.

## Sign out of the application

1. To sign out of the application, select **Sign out** in the navigation bar.
1. A window appears asking which account to sign out of.
1. Upon successful sign out, a final window appears advising you to close all browser windows.

## Next steps

> [!div class="nextstepaction"]
> [Enable self-service password reset](./how-to-enable-password-reset-customers.md)
active-directory How To Manage User Profile Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-user-profile-info.md
Previously updated : 03/28/2023 Last updated : 05/24/2023
In the **User settings** area of Azure AD, you can adjust several settings that
Go to **Azure AD** > **User settings**.
-![Screenshot of the Azure AD user settings options.](media/how-to-manage-user-profile-info/user-settings-options.png)
+[ ![Screenshot of the Azure AD user settings options.](media/how-to-manage-user-profile-info/user-settings.png) ](media/how-to-manage-user-profile-info/user-settings.png#lightbox)
The following settings can be managed from Azure AD **User settings**.

- Manage how end users launch and view their applications
- Allow users to register their own applications
-- [Prevent non-admins from creating their own tenants](users-default-permissions.md#restrict-member-users-default-permissions)
+- Prevent non-admins from creating their own tenants
+ - For more information, see [default user permissions](users-default-permissions.md#restrict-member-users-default-permissions)
+- Allow users to create security groups
+- Guest user access restrictions
+ - Guest users have the same access as members (most inclusive)
+ - Guest users have limited access to properties and memberships of directory objects
+ - Guest user access is restricted to properties and memberships of their own directory objects (most restrictive)
- Restrict access to the Azure AD administration portal
- [Allow users to connect their work or school account with LinkedIn](../enterprise-users/linkedin-user-consent.md)
- [Enable the "Stay signed in?" prompt](how-to-manage-stay-signed-in-prompt.md)
The following settings can be managed from Azure AD **User settings**.
- [External user leave settings](../external-identities/self-service-sign-up-user-flow.md#enable-self-service-sign-up-for-your-tenant)
- Collaboration restrictions
- Manage user feature settings
+ - Users can use preview features for My Apps
+ - Administrators can access My Staff
## Next steps
active-directory Howto Configure Prerequisites For Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md
To access the Azure AD reporting API, you must grant your app *Read directory da
![Screenshot of the API permissions menu option and Add permissions button.](./media/howto-configure-prerequisites-for-reporting-api/api-permissions-new-permission.png)

1. Select **Microsoft Graph** > **Application permissions**.
-1. Add **Directory.ReadAll** and **AuditLog.Read.All**, then select the **Add permissions** button.
+1. Add **Directory.Read.All**, **AuditLog.Read.All**, and **Policy.Read.ConditionalAccess**, then select the **Add permissions** button.
- If you need more permissions to run the queries you need, you can add them now or modify the permissions as needed in Microsoft Graph.
- For more information, see [Work with Graph Explorer](/graph/graph-explorer/graph-explorer-features).
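To sketch how an app granted these application permissions might then call the reporting API (a hedged example, not from the article: the tenant ID, client ID, and secret are placeholders, and Node.js 18+ is assumed for the global `fetch`):

```javascript
import { ClientSecretCredential } from "@azure/identity";

// Placeholder app registration details; replace with your own.
const credential = new ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>");

async function listDirectoryAudits() {
    // Acquire an app-only token for Microsoft Graph.
    const { token } = await credential.getToken("https://graph.microsoft.com/.default");

    // The directoryAudits endpoint requires the AuditLog.Read.All permission granted above.
    const response = await fetch("https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$top=5", {
        headers: { Authorization: `Bearer ${token}` },
    });
    const body = await response.json();
    body.value.forEach((entry) => console.log(entry.activityDisplayName));
}

listDirectoryAudits().catch(console.error);
```

App-only access like this works only after an administrator consents to the permissions granted above.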
active-directory Recommendation Remove Unused Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-remove-unused-apps.md
Previously updated : 03/07/2023 Last updated : 05/24/2023
Take note of the following common scenarios or known limitations of the "Remove
- App proxy - Add-in apps
-* This recommendation currently surfaces applications that were created within the past 30 days *and* shows as unused. Updates to the recommendation to filter out recently created apps so that they can complete a full cycle are in progress.
+The recommendation currently flags apps that were created recently as unused, even though newly created apps might need more time before the code that uses the application registration is deployed. Work is underway to filter out apps created within the past 60 days so they don't show as unused.
## Next steps
active-directory Parallels Desktop Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parallels-desktop-tutorial.md
Previously updated : 02/23/2023 Last updated : 05/25/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
`https://account.parallels.com/webapp/sso/acs/<ID>`

> [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Please note the Identifier and Reply URL values are customer specific and should be able to specify it manually by copying it from Parallels My Account to the identity provider Azure. Contact [Parallels Desktop Client support team](mailto:parallels.desktop.sso@alludo.com) for any help. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Reply URL. The Identifier and Reply URL values are customer specific, and you should specify them manually by copying them from Parallels My Account to the identity provider (Azure). Contact [Parallels Desktop support team](https://www.parallels.com/support/) for any help. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
c. In the **Sign on URL** textbox, type the URL: `https://my.parallels.com/login?sso=1`
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure Parallels Desktop SSO
-To configure single sign-on on **Parallels Desktop** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [Parallels Desktop support team](mailto:parallels.desktop.sso@alludo.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Parallels Desktop** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from the Azure portal to the [Parallels Desktop support team](https://www.parallels.com/support/). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Parallels Desktop test user
-In this section, you create a user called Britta Simon at Parallels Desktop. Work with [Parallels Desktop support team](mailto:parallels.desktop.sso@alludo.com) to add the users in the Parallels Desktop platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon at Parallels Desktop. Work with [Parallels Desktop support team](https://www.parallels.com/support/) to add the users in the Parallels Desktop platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
Previously updated : 03/29/2023 Last updated : 05/25/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
| Reply URL |
|-|
| `https://<instancename>.service-now.com/navpage.do` |
- | `https://<instancename>.service-now.com/customer.do` |
+ | `https://<instancename>.service-now.com/consumer.do` |
| d. In **Logout URL**, enter a URL that uses the following pattern:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
| Sign on URL |
|--|
| `https://<instance-name>.service-now.com/login_with_sso.do?glide_sso_id=<sys_id of the sso configuration>` |
- | `https://<instancename>.service-now.com/customer.do` |
+ | `https://<instancename>.service-now.com/consumer.do` |
| b. For **Identifier (Entity ID)**, enter a URL that uses the following pattern:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
| Reply URL |
|--|
| `https://<instancename>.service-now.com/navpage.do` |
- | `https://<instancename>.service-now.com/customer.do` |
+ | `https://<instancename>.service-now.com/consumer.do` |
| d. In **Logout URL**, enter a URL that uses the following pattern:
active-directory Superannotate Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/superannotate-tutorial.md
Previously updated : 03/13/2023 Last updated : 05/25/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
| | | | groups | user.groups [ApplicationGroup] |
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, copy the **App Federation Metadata Url** or download the **Federation Metadata XML** and save it on your computer.
![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")

## Configure SuperAnnotate SSO
-To configure single sign-on on **SuperAnnotate** side, you need to send the **App Federation Metadata Url** to [SuperAnnotate support team](mailto:support@superannotate.com). They set this setting to have the SAML SSO connection set properly on both sides
+To configure single sign-on on the **SuperAnnotate** side, enter the copied **App Federation Metadata Url** or upload the downloaded **Federation Metadata XML** in the SSO setup page on the SuperAnnotate side so that the SAML SSO connection is set properly on both sides.
### Create SuperAnnotate test user
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
In some cases recommendations can't be adopted or might not be applicable, such
In such cases, simply use the Dismiss/Postpone options associated with the recommendation.
+### Limitations
+- The savings associated with the recommendations are based on retail rates and don't take into account any temporary or long-term discounts that might apply to your account. As a result, the listed savings might be higher than actually possible.
+- The recommendations don't take into account the presence of Reserved Instances (RI) / Savings plan purchases. As a result, the listed savings might be higher than actually possible. In some cases, for example in the case of cross-series recommendations, depending on the types of SKUs that reserved instances have been purchased for, the costs might increase when the optimization recommendations are followed. We caution you to consider your RI/Savings plan purchases when you act on the right-size recommendations.
We're constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).

## Next steps
advisor Advisor High Availability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-high-availability-recommendations.md
Azure Advisor checks for any VPN gateways that use a Basic SKU and recommends th
- Higher stability and availability.

## Ensure reliable outbound connectivity with VNet NAT
-Using default outbound connecitivty provided by a Standard Load Balancer or other Azure resources is not recommended for production workloads as this causes connection failures (also called SNAT port exhaustion). The recommended approach is using a VNet NAT which will prevent any failures of connectivty in this regard. NAT can scale seamlessly to ensure your application is never out ports. [Learn more about VNet NAT](../virtual-network/nat-gateway/nat-overview.md).
+Using default outbound connectivity provided by a Standard Load Balancer or other Azure resources is not recommended for production workloads, as this causes connection failures (also called SNAT port exhaustion). The recommended approach is using a VNet NAT, which will prevent any failures of connectivity in this regard. NAT can scale seamlessly to ensure your application never runs out of ports. [Learn more about VNet NAT](../virtual-network/nat-gateway/nat-overview.md).
## Ensure virtual machine fault tolerance (temporarily disabled)
advisor Advisor How To Plan Migration Workloads Service Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-how-to-plan-migration-workloads-service-retirement.md
Last updated 05/19/2023
# Prepare migration of your workloads impacted by service retirement
-Azure Advisor helps you assess and improve the continuity of your business-critical applications. It's important to be aware of upcoming Azure products and feature retirements to understand their impact on your workloads and plan migration.
+Azure Advisor helps you assess and improve the continuity of your business-critical applications. It's important to be aware of upcoming retirements of Azure services and features to understand their impact on your workloads and plan migration.
## Service Retirement workbook
-The Service Retirement workbook provides a single centralized resource level view of product retirements. It helps you assess impact, evaluate options, and plan for migration from retiring products and features. The workbook template is available in Azure Advisor gallery.
+The Service Retirement workbook provides a single centralized resource level view of service retirements. It helps you assess impact, evaluate options, and plan for migration from retiring services and features. The workbook template is available in Azure Advisor gallery.
Here's how to get started:

1. Navigate to [Workbooks gallery](https://aka.ms/advisorworkbooks) in Azure Advisor
The workbook shows a list and a map view of service retirements that impact your
:::image type="content" source="media/advisor-service-retirement-workbook-details.png" alt-text="Screenshot of the Azure Advisor service retirement workbook template, detailed view." lightbox="media/advisor-service-retirement-workbook-details.png"::: > [!NOTE]
-> The workbook contains information about a subset of products and features that are in the retirement lifecycle. While we continue to add more services to this workbook, you can view the lifecycle status of all Azure products and services by visiting [Azure updates](https://azure.microsoft.com/updates/?updateType=retirements).
+> The workbook contains information about a subset of services and features that are in the retirement lifecycle. While we continue to add more services to this workbook, you can view the lifecycle status of all Azure services by visiting [Azure updates](https://azure.microsoft.com/updates/?updateType=retirements).
-For more information about Advisor recommendations, see:
-* [Introduction to Advisor](advisor-overview.md)
+For more information, see:
* [Azure Service Health](../service-health/overview.md) * [Azure updates](https://azure.microsoft.com/updates/?updateType=retirements)
+* [Introduction to Advisor](advisor-overview.md)
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Learn more about [MariaDB server - OrcasMariaDbStorageLimit (Scale the storage l
Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
-Learn more about [MariaDB server - OrcasMariaDbCpuOverlaod (Increase the MariaDB server vCores)](https://aka.ms/mariadbpricing).
+Learn more about [MariaDB server - OrcasMariaDbCpuOverload (Increase the MariaDB server vCores)](https://aka.ms/mariadbpricing).
### Scale the MariaDB server to higher SKU
Learn more about [Database Instance - DisableIPv6Protocol (For improved file sys
The parameter net.ipv4.tcp_slow_start_after_idle removes the need to incrementally scale up the TCP window size for TCP connections that were idle for some time. By setting this parameter to zero as per SAP note: 302436, the maximum speed is used from the beginning for previously idle TCP connections.
-Learn more about [Database Instance - ParamterSlowStart (To improve file system performance in HANA DB with ANF, disable parameter for slow start after idle)](https://launchpad.support.sap.com/#/notes/3024346).
+Learn more about [Database Instance - ParameterSlowStart (To improve file system performance in HANA DB with ANF, disable parameter for slow start after idle)](https://launchpad.support.sap.com/#/notes/3024346).
### For improved file system performance in HANA DB with ANF optimize tcp_max_syn_backlog OS parameter
Enable the tcp_sack parameter as per SAP note: 302436. This configuration certif
Learn more about [Database Instance - TCPSackParameter (For improved file system performance in HANA DB with ANF, enable the tcp_sack OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
-### In high-availaility scenario for HANA DB with ANF, disable the tcp_timestamps OS parameter
+### In high-availability scenario for HANA DB with ANF, disable the tcp_timestamps OS parameter
Disable the tcp_timestamps parameter as per SAP note: 302436. This configuration certifies HANA DB to run with ANF and improves file system performance in high-availability scenarios for HANA DB with ANF in SAP workloads
-Learn more about [Database Instance - DisableTCPTimestamps (In high-availaility scenario for HANA DB with ANF, disable the tcp_timestamps OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+Learn more about [Database Instance - DisableTCPTimestamps (In high-availability scenario for HANA DB with ANF, disable the tcp_timestamps OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
### For improved file system performance in HANA DB with ANF, enable the tcp_timestamps OS parameter
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
Last updated 05/17/2023
# Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
-A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to dynamically create an Azure file share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
+A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. You can use a persistent volume with one or many pods, and it can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to dynamically create an Azure file share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
This article shows you how to:
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin -- An Azure [storage account][azure-storage-account].--- The Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* You need an Azure [storage account][azure-storage-account].
+* Make sure you have Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* When choosing between standard and premium file shares, it's important you understand the provisioning model and requirements of the expected usage pattern you plan to run on Azure Files. For more information, see [Choosing an Azure Files performance tier based on usage patterns][azure-files-usage].
## Dynamically provision a volume
-This section provides guidance for cluster administrators who want to provision one or more persistent volumes that include details of one or more shares on Azure Files for use by a workload. A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Files file share.
+This section provides guidance for cluster administrators who want to provision one or more persistent volumes that include details of one or more shares on Azure Files. A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Files file share.
### Dynamic provisioning parameters
This section provides guidance for cluster administrators who want to provision
| | | | | |skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `StandardSSD_LRS`<br> Minimum file share size for Premium account type is 100 GB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.| |protocol | Specify file share protocol. | `smb`, `nfs` | No | `smb` |
-|location | Specify Azure region where Azure storage account will be created. | For example, `eastus`. | No | If empty, driver uses the same location name as current AKS cluster.|
-|resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
-|shareName | Specify Azure file share name | Existing or new Azure file share name. | No | If empty, driver generates an Azure file share name. |
+|location | Specify the Azure region of the Azure storage account.| For example, `eastus`. | No | If empty, driver uses the same location name as current AKS cluster.|
+|resourceGroup | Specify the resource group for the Azure Disks.| Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
+|shareName | Specify Azure file share name. | Existing or new Azure file share name. | No | If empty, driver generates an Azure file share name. |
|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No |
-|folderName | Specify folder name in Azure file share. | Existing folder name in Azure file share. | No | If folder name does not exist in file share, mount will fail. |
+|folderName | Specify folder name in Azure file share. | Existing folder name in Azure file share. | No | If folder name doesn't exist in file share, the mount fails. |
|shareAccessTier | [Access tier for file share][storage-tiers] | General purpose v2 account can choose between `TransactionOptimized` (default), `Hot`, and `Cool`. Premium storage account type for file shares only. | No | Empty. Use default setting for different storage account types.| |accountAccessTier | [Access tier for storage account][access-tiers-overview] | Standard account can choose `Hot` or `Cool`, and Premium account can only choose `Premium`. | No | Empty. Use default setting for different storage account types. |
-|server | Specify Azure storage account server address | Existing server address, for example `accountname.privatelink.file.core.windows.net`. | No | If empty, driver uses default `accountname.file.core.windows.net` or other sovereign cloud account address. |
+|server | Specify Azure storage account server address. | Existing server address, for example `accountname.privatelink.file.core.windows.net`. | No | If empty, driver uses default `accountname.file.core.windows.net` or other sovereign cloud account address. |
|disableDeleteRetentionPolicy | Specify whether disable DeleteRetentionPolicy for storage account created by driver. | `true` or `false` | No | `false` | |allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true` or `false` | No | `false` |
-|networkEndpointType | Specify network endpoint type for the storage account created by driver. If `privateEndpoint` is specified, a private endpoint will be created for the storage account. For other cases, a service endpoint will be created by default. | "",`privateEndpoint`| No | "" |
+|networkEndpointType | Specify network endpoint type for the storage account created by driver. If `privateEndpoint` is specified, a private endpoint is created for the storage account. For other cases, a service endpoint is created by default. | "",`privateEndpoint`| No | "" |
|requireInfraEncryption | Specify whether or not the service applies a secondary layer of encryption with platform managed keys for data at rest for storage account created by driver. | `true` or `false` | No | `false` | |storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. | |tags | [Tags][tag-resources] are created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" | |matchTags | Match tags when driver tries to find a suitable storage account. | `true` or `false` | No | `false` | | | **Following parameters are only for SMB protocol** | | | |subscriptionID | Specify Azure subscription ID where Azure file share is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided. |
-|storeAccountKey | Specify whether to store account key to Kubernetes secret. | `true` or `false`<br>`false` means driver leverages kubelet identity to get account key. | No | `true` |
+|storeAccountKey | Specify whether to store account key to Kubernetes secret. | `true` or `false`<br>`false` means driver uses kubelet identity to get account key. | No | `true` |
|secretName | Specify secret name to store account key. | | No |
-|secretNamespace | Specify the namespace of secret to store account key. <br><br> **Note:** <br> If `secretNamespace` isn't specified, the secret is created in the same namespace as the pod. | `default`,`kube-system`, etc | No | Pvc namespace, for example `csi.storage.k8s.io/pvc/namespace` |
-|useDataPlaneAPI | Specify whether to use [data plane API][data-plane-api] for file share create/delete/resize. This could solve the SRP API throttling issue because the data plane API has almost no limit, while it would fail when there is firewall or Vnet setting on storage account. | `true` or `false` | No | `false` |
+|secretNamespace | Specify the namespace of secret to store account key. <br><br> **Note:** <br> If `secretNamespace` isn't specified, the secret is created in the same namespace as the pod. | `default`,`kube-system`, etc. | No | PVC namespace, for example `csi.storage.k8s.io/pvc/namespace` |
+|useDataPlaneAPI | Specify whether to use [data plane API][data-plane-api] for file share create/delete/resize, which could solve the SRP API throttling issue because the data plane API has almost no limit, while it would fail when there's firewall or Vnet settings on storage account. | `true` or `false` | No | `false` |
| | **Following parameters are only for NFS protocol** | | | |rootSquashType | Specify root squashing behavior on the share. The default is `NoRootSquash` | `AllSquash`, `NoRootSquash`, `RootSquash` | No | |mountPermissions | Mounted folder permissions. The default is `0777`. If set to `0`, driver doesn't perform `chmod` after mount | `0777` | No |
This section provides guidance for cluster administrators who want to provision
|vnetResourceGroup | Specify VNet resource group where virtual network is defined. | Existing resource group name. | No | If empty, driver uses the `vnetResourceGroup` value in Azure cloud config file. | |vnetName | Virtual network name | Existing virtual network name. | No | If empty, driver uses the `vnetName` value in Azure cloud config file. | |subnetName | Subnet name | Existing subnet name of the agent node. | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
-|fsGroupChangePolicy | Indicates how volume's ownership is changed by the driver. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch` (default), `Always`, `None` | No | `OnRootMismatch`|
+|fsGroupChangePolicy | Indicates how the driver changes volume's ownership. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch` (default), `Always`, `None` | No | `OnRootMismatch`|
### Create a storage class
-A storage class is used to define how an Azure file share is created. A storage account is automatically created in the [node resource group][node-resource-group] for use with the storage class to hold the Azure Files file share. Choose of the following [Azure storage redundancy][storage-skus] for *skuName*:
+Storage classes define how to create an Azure file share. A storage account is automatically created in the [node resource group][node-resource-group] for use with the storage class to hold the Azure Files file share. Choose one of the following [Azure storage redundancy SKUs][storage-skus] for `skuName`:
-* *Standard_LRS* - standard locally redundant storage (LRS)
-* *Standard_GRS* - standard geo-redundant storage (GRS)
-* *Standard_ZRS* - standard zone redundant storage (ZRS)
-* *Standard_RAGRS* - standard read-access geo-redundant storage (RA-GRS)
-* *Premium_LRS* - premium locally redundant storage (LRS)
-* *Premium_ZRS* - premium zone redundant storage (ZRS)
+* `Standard_LRS`: Standard locally redundant storage (LRS)
+* `Standard_GRS`: Standard geo-redundant storage (GRS)
+* `Standard_ZRS`: Standard zone redundant storage (ZRS)
+* `Standard_RAGRS`: Standard read-access geo-redundant storage (RA-GRS)
+* `Premium_LRS`: Premium locally redundant storage (LRS)
+* `Premium_ZRS`: Premium zone redundant storage (ZRS)
> [!NOTE] > Minimum premium file share is 100GB. For more information on Kubernetes storage classes for Azure Files, see [Kubernetes Storage Classes][kubernetes-storage-classes].
-Create a file named `azure-file-sc.yaml` and copy in the following example manifest. For more information on *mountOptions*, see the [Mount options][mount-options] section.
+1. Create a file named `azure-file-sc.yaml` and copy in the following example manifest. For more information on `mountOptions`, see the [Mount options][mount-options] section.
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: my-azurefile
-provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
-allowVolumeExpansion: true
-mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=0
- - gid=0
- - mfsymlinks
- - cache=strict
- - actimeo=30
-parameters:
- skuName: Premium_LRS
-```
+ ```yaml
+ kind: StorageClass
+ apiVersion: storage.k8s.io/v1
+ metadata:
+ name: my-azurefile
+ provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
+ allowVolumeExpansion: true
+ mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - actimeo=30
+ parameters:
+ skuName: Premium_LRS
+ ```
-Create the storage class with the [kubectl apply][kubectl-apply] command:
+2. Create the storage class using the [`kubectl apply`][kubectl-apply] command.
-```bash
-kubectl apply -f azure-file-sc.yaml
-```
+ ```bash
+ kubectl apply -f azure-file-sc.yaml
+ ```
### Create a persistent volume claim
-A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure file share. The following YAML can be used to create a persistent volume claim *100 GB* in size with *ReadWriteMany* access. For more information on access modes, see the [Kubernetes persistent volume][access-modes] documentation.
+A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure file share. You can use the following YAML to create a persistent volume claim *100 GB* in size with *ReadWriteMany* access. For more information on access modes, see [Kubernetes persistent volume][access-modes].
-Now create a file named `azure-file-pvc.yaml` and copy in the following YAML. Make sure that the *storageClassName* matches the storage class created in the last step:
+1. Create a file named `azure-file-pvc.yaml` and copy in the following YAML. Make sure the `storageClassName` matches the storage class you created in the previous step.
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: my-azurefile
-spec:
- accessModes:
- - ReadWriteMany
- storageClassName: my-azurefile
- resources:
- requests:
- storage: 100Gi
-```
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: my-azurefile
+ spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: my-azurefile
+ resources:
+ requests:
+ storage: 100Gi
+ ```
-> [!NOTE]
-> If using the *Premium_LRS* sku for your storage class, the minimum value for *storage* must be *100Gi*.
+ > [!NOTE]
+ > If using the `Premium_LRS` SKU for your storage class, the minimum value for `storage` must be `100Gi`.
-Create the persistent volume claim with the [kubectl apply][kubectl-apply] command:
+2. Create the persistent volume claim using the [`kubectl apply`][kubectl-apply] command.
-```bash
-kubectl apply -f azure-file-pvc.yaml
-```
+ ```bash
+ kubectl apply -f azure-file-pvc.yaml
+ ```
-Once completed, the file share will be created. A Kubernetes secret is also created that includes connection information and credentials. You can use the [kubectl get][kubectl-get] command to view the status of the PVC:
+ Once completed, the file share is created. A Kubernetes secret is also created that includes connection information and credentials. You can use the [`kubectl get`][kubectl-get] command to view the status of the PVC:
-```bash
-kubectl get pvc my-azurefile
-```
+ ```bash
+ kubectl get pvc my-azurefile
+ ```
-The output of the command resembles the following example:
+ The output of the command resembles the following example:
-```output
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-my-azurefile Bound pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477 10Gi RWX my-azurefile 5m
-```
+ ```output
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ my-azurefile Bound pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477 10Gi RWX my-azurefile 5m
+ ```
### Use the persistent volume
-The following YAML creates a pod that uses the persistent volume claim *my-azurefile* to mount the Azure Files file share at the */mnt/azure* path. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+The following YAML creates a pod that uses the persistent volume claim *my-azurefile* to mount the Azure Files file share at the */mnt/azure* path. For Windows Server containers, specify a `mountPath` using the Windows path convention, such as *'D:'*.
-Create a file named `azure-pvc-files.yaml`, and copy in the following YAML. Make sure that the *claimName* matches the PVC created in the last step.
+1. Create a file named `azure-pvc-files.yaml`, and copy in the following YAML. Make sure the `claimName` matches the PVC you created in the previous step.
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypod
-spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: my-azurefile
-```
+ ```yaml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: mypod
+ spec:
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/azure"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: my-azurefile
+ ```
-Create the pod with the [kubectl apply][kubectl-apply] command.
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
-```bash
-kubectl apply -f azure-pvc-files.yaml
-```
+ ```bash
+ kubectl apply -f azure-pvc-files.yaml
+ ```
-You now have a running pod with your Azure Files file share mounted in the */mnt/azure* directory. This configuration can be seen when inspecting your pod using the [kubectl describe][kubectl-describe] command. The following condensed example output shows the volume mounted in the container:
-
-```console
-Containers:
- mypod:
- Container ID: docker://053bc9c0df72232d755aa040bfba8b533fa696b123876108dec400e364d2523e
- Image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- Image ID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
- State: Running
- Started: Fri, 01 Mar 2019 23:56:16 +0000
- Ready: True
- Mounts:
- /mnt/azure from volume (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from default-token-8rv4z (ro)
-[...]
-Volumes:
- volume:
- Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
- ClaimName: my-azurefile
- ReadOnly: false
-[...]
-```
+ You now have a running pod with your Azure Files file share mounted in the */mnt/azure* directory. This configuration can be seen when inspecting your pod using the [`kubectl describe`][kubectl-describe] command. The following condensed example output shows the volume mounted in the container.
+
+ ```output
+ Containers:
+ mypod:
+ Container ID: docker://053bc9c0df72232d755aa040bfba8b533fa696b123876108dec400e364d2523e
+ Image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ Image ID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
+ State: Running
+ Started: Fri, 01 Mar 2019 23:56:16 +0000
+ Ready: True
+ Mounts:
+ /mnt/azure from volume (rw)
+ /var/run/secrets/kubernetes.io/serviceaccount from default-token-8rv4z (ro)
+ [...]
+ Volumes:
+ volume:
+ Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+ ClaimName: my-azurefile
+ ReadOnly: false
+ [...]
+ ```
### Mount options
-The default value for *fileMode* and *dirMode* is *0777* for Kubernetes version 1.13.0 and above. If dynamically creating the persistent volume with a storage class, mount options can be specified on the storage class object. For more information, see [Mount options](https://kubernetes.io/docs/concepts/storage/storage-classes/#mount-options). The following example sets *0777*:
+The default value for `fileMode` and `dirMode` is *0777* for Kubernetes versions 1.13.0 and above. If you're dynamically creating the persistent volume with a storage class, you can specify mount options on the storage class object. For more information, see [Mount options](https://kubernetes.io/docs/concepts/storage/storage-classes/#mount-options). The following example sets *0777*:
```yaml kind: StorageClass
parameters:
### Using Azure tags
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
+For more information on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
## Statically provision a volume
This section provides guidance for cluster administrators who want to create one
|nodeStageSecretRef.name | Specify a secret name that stores storage account name and key. | Existing secret name | Yes || |nodeStageSecretRef.namespace | Specify a secret namespace. | Kubernetes namespace | Yes || | | **Following parameters are only for NFS protocol** | | | |
-|volumeAttributes.fsGroupChangePolicy | Indicates how a volumes ownership is changed by the driver. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch` (default), `Always`, `None` | No | `OnRootMismatch` |
+|volumeAttributes.fsGroupChangePolicy | Indicates how the driver changes a volume's ownership. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch` (default), `Always`, `None` | No | `OnRootMismatch` |
|volumeAttributes.mountPermissions | Specify mounted folder permissions. The default is `0777` | | No ||

### Create an Azure file share
-Before you can use an Azure Files file share as a Kubernetes volume, you must create an Azure Storage account and the file share. In this article, you'll create the storage container in the node resource group.
+Before you can use an Azure Files file share as a Kubernetes volume, you must create an Azure Storage account and the file share.
-1. Get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**.
+1. Get the resource group name using the [`az aks show`][az-aks-show] command with the `--query nodeResourceGroup` parameter.
```azurecli-interactive az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
Before you can use an Azure Files file share as a Kubernetes volume, you must cr
MC_myResourceGroup_myAKSCluster_eastus ```
-2. The following command creates a storage account using the Standard_LRS SKU. Replace the following placeholders:
+2. Create a storage account using the [`az storage account create`][az-storage-account-create] command with the `--sku` parameter. The following command creates a storage account using the `Standard_LRS` SKU. Make sure to replace the following placeholders:
* `myAKSStorageAccount` with the name of the storage account * `nodeResourceGroupName` with the name of the resource group that the AKS cluster nodes are hosted in
- * `location` with the name of the region to create the resource in. It should be the same region as the AKS cluster nodes.
+ * `location` with the name of the region to create the resource in. It should be the same region as the AKS cluster nodes.
```azurecli-interactive
az storage account create -n myAKSStorageAccount -g nodeResourceGroupName -l location --sku Standard_LRS
```
-3. Run the following command to export the connection string as an environment variable. This is used when creating the Azure file share in a later step.
+3. Export the connection string as an environment variable using the following command, which you use to create the file share.
```azurecli-interactive
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n storageAccountName -g resourceGroupName -o tsv)
```
-4. Create the file share using the [Az storage share create][az-storage-share-create] command. Replace the placeholder `shareName` with a name you want to use for the share.
+4. Create the file share using the [`az storage share create`][az-storage-share-create] command. Make sure to replace `shareName` with your share name.
```azurecli-interactive
az storage share create -n shareName --connection-string $AZURE_STORAGE_CONNECTION_STRING
```
-5. Run the following command to export the storage account key as an environment variable.
+5. Export the storage account key as an environment variable using the following command.
```azurecli-interactive
STORAGE_KEY=$(az storage account keys list --resource-group nodeResourceGroupName --account-name myAKSStorageAccount --query "[0].value" -o tsv)
```
-6. Run the following commands to echo the storage account name and key. Copy this information as these values are needed when you create the Kubernetes volume later in this article.
+6. Echo the storage account name and key using the following command. Copy this information, as you need these values when creating the Kubernetes volume.
```azurecli-interactive echo Storage account key: $STORAGE_KEY
Before you can use an Azure Files file share as a Kubernetes volume, you must cr
Kubernetes needs credentials to access the file share created in the previous step. These credentials are stored in a [Kubernetes secret][kubernetes-secret], which is referenced when you create a Kubernetes pod.
-Use the `kubectl create secret` command to create the secret. The following example creates a secret named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey* from the previous step. To use an existing Azure storage account, provide the account name and key.
+1. Create the secret using the `kubectl create secret` command. The following example creates a secret named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey* from the previous step. To use an existing Azure storage account, provide the account name and key.
-```bash
-kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=myAKSStorageAccount --from-literal=azurestorageaccountkey=$STORAGE_KEY
-```
+ ```bash
+ kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=myAKSStorageAccount --from-literal=azurestorageaccountkey=$STORAGE_KEY
+ ```
### Mount file share as an inline volume > [!NOTE]
-> Inline volume can only access secrets in the same namespace as the pod. To specify a different secret namespace, [please use the persistent volume example][persistent-volume-example] below instead.
+> Inline volume can only access secrets in the same namespace as the pod. To specify a different secret namespace, instead use the [persistent volume example][persistent-volume-example].
-To mount the Azure Files file share into your pod, configure the volume in the container spec. Create a new file named `azure-files-pod.yaml` with the following contents. If you changed the name of the file share or secret name, update the *shareName* and *secretName*. If desired, update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+To mount the Azure Files file share into your pod, you configure the volume in the container spec.
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: mypod
-spec:
- nodeSelector:
- kubernetes.io/os: linux
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- name: mypod
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - name: azure
- mountPath: /mnt/azure
- volumes:
- - name: azure
- csi:
- driver: file.csi.azure.com
- readOnly: false
- volumeAttributes:
- secretName: azure-secret # required
- shareName: aksshare # required
- mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock" # optional
-```
+1. Create a new file named `azure-files-pod.yaml` and copy in the following contents. If you changed the name of the file share or secret name, update the `shareName` and `secretName`. You can also update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a `mountPath` using the Windows path convention, such as *'D:'*.
-Use the [kubectl apply][kubectl-apply] command to create the pod.
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: mypod
+ spec:
+ nodeSelector:
+ kubernetes.io/os: linux
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ name: mypod
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - name: azure
+ mountPath: /mnt/azure
+ volumes:
+ - name: azure
+ csi:
+ driver: file.csi.azure.com
+ readOnly: false
+ volumeAttributes:
+ secretName: azure-secret # required
+ shareName: aksshare # required
+ mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock" # optional
+ ```
-```bash
-kubectl apply -f azure-files-pod.yaml
-```
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
-You now have a running pod with an Azure Files file share mounted at */mnt/azure*. You can verify the share is mounted successfully using the [kubectl describe][kubectl-describe] command:
+ ```bash
+ kubectl apply -f azure-files-pod.yaml
+ ```
-```bash
-kubectl describe pod mypod
-```
+ You now have a running pod with an Azure Files file share mounted at */mnt/azure*. You can verify the share is mounted successfully using the [`kubectl describe`][kubectl-describe] command.
-### Mount file share as a persistent volume
+ ```bash
+ kubectl describe pod mypod
+ ```
-The following example demonstrates how to mount a file share as a persistent volume.
+### Mount file share as a persistent volume
-1. Create a file named `azurefiles-pv.yaml` and copy in the following YAML. Under `csi`, update `resourceGroup`, `volumeHandle`, and `shareName`. For mount options, the default value for *fileMode* and *dirMode* is *0777*.
+1. Create a new file named `azurefiles-pv.yaml` and copy in the following contents. Under `csi`, update `resourceGroup`, `volumeHandle`, and `shareName`. For mount options, the default value for `fileMode` and `dirMode` is *0777*.
```yaml apiVersion: v1
The following example demonstrates how to mount a file share as a persistent vol
- nobrl ```
-2. Run the following command to create the persistent volume using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+2. Create the persistent volume using the [`kubectl create`][kubectl-create] command.
```bash
kubectl create -f azurefiles-pv.yaml
```
-3. Create a *azurefiles-mount-options-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume* and copy the following YAML.
+3. Create a new file named *azurefiles-mount-options-pvc.yaml* and copy the following contents.
```yaml apiVersion: v1
The following example demonstrates how to mount a file share as a persistent vol
storage: 5Gi ```
-4. Use the `kubectl` commands to create the *PersistentVolumeClaim*.
+4. Create the PersistentVolumeClaim using the [`kubectl apply`][kubectl-apply] command.
-```bash
-kubectl apply -f azurefiles-mount-options-pvc.yaml
-```
+ ```bash
+ kubectl apply -f azurefiles-mount-options-pvc.yaml
+ ```
-5. Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume* by running the following command.
+5. Verify your PersistentVolumeClaim is created and bound to the PersistentVolume using the [`kubectl get`][kubectl-get] command.
```bash kubectl get pvc azurefile
kubectl apply -f azurefiles-mount-options-pvc.yaml
azurefile Bound azurefile 5Gi RWX azurefile 5s ```
-6. Update your container spec to reference your *PersistentVolumeClaim* and update your pod. For example:
+6. Update your container spec to reference your *PersistentVolumeClaim*, and then update your pod in the YAML file. For example:
```yaml ...
kubectl apply -f azurefiles-mount-options-pvc.yaml
claimName: azurefile ```
-7. Because a pod spec can't be updated in place, use [kubectl delete][kubectl-delete] and [kubectl apply][kubectl-apply] commands to delete and then re-create the pod:
+7. A pod spec can't be updated in place, so delete the pod using the [`kubectl delete`][kubectl-delete] command and recreate it using the [`kubectl apply`][kubectl-apply] command.
```bash kubectl delete pod mypod
For associated best practices, see [Best practices for storage and backups in AK
[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview [CSI driver parameters]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share [kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-file
-[kubernetes-persistent-volume]: https://kubernetes.io/docs/concepts/storage/persistent-volumes
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
For associated best practices, see [Best practices for storage and backups in AK
[storage-tiers]: ../storage/files/storage-files-planning.md#storage-tiers [access-tiers-overview]: ../storage/blobs/access-tiers-overview.md [tag-resources]: ../azure-resource-manager/management/tag-resources.md
+[azure-files-usage]: ../storage/files/understand-performance.md#choosing-a-performance-tier-based-on-usage-patterns
+[az-storage-account-create]: /cli/azure/storage/account#az-storage-account-create
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
# Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver
-The Secrets Store CSI Driver on Azure Kubernetes Service (AKS) provides a variety of methods of identity-based access to your Azure key vault. This article outlines these methods and how to use them to access your key vault and its contents from your AKS cluster. For more information, see [Use the Secrets Store CSI Driver][csi-secrets-store-driver].
+The Secrets Store CSI Driver on Azure Kubernetes Service (AKS) provides various methods of identity-based access to your Azure key vault. This article outlines these methods and how to use them to access your key vault and its contents from your AKS cluster. For more information, see [Use the Secrets Store CSI Driver][csi-secrets-store-driver].
Currently, the following methods of access are available:

-- Azure AD Workload identity (preview)
+- Azure AD Workload identity
- User-assigned managed identity
-## Access with an Azure AD workload identity (preview)
+## Access with an Azure AD workload identity
-An [Azure AD workload identity][workload-identity] is an identity used by an application running on a pod that can authenticate itself against other Azure services that support it, such as Storage or SQL. It integrates with the capabilities native to Kubernetes to federate with external identity providers. In this security model, the AKS cluster acts as token issuer where Azure Active Directory uses OpenID Connect to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library using the Azure SDK or the Microsoft Authentication Library (MSAL).
+An [Azure AD workload identity][workload-identity] is an identity that an application running on a pod uses to authenticate itself against other Azure services that support it, such as Storage or SQL. It integrates with the native Kubernetes capabilities to federate with external identity providers. In this security model, the AKS cluster acts as the token issuer. Azure Active Directory (Azure AD) then uses OpenID Connect (OIDC) to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library or the Microsoft Authentication Library (MSAL).
> [!NOTE]
> This authentication method replaces Azure AD pod-managed identity (preview). The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022.

### Prerequisites

-- Installed the latest version of the `aks-preview` extension, version 0.5.102 or later. To learn more, see [How to install extensions][how-to-install-extensions].
-- Existing Keyvault
-- Existing Azure Subscription with EnableWorkloadIdentityPreview feature enabled
-- Existing AKS cluster with enable-oidc-issuer and enable-workload-identity enabled
+Before you begin, you must have the following prerequisites in place:
-Azure AD workload identity (preview) is supported on both Windows and Linux clusters.
+- An existing Key Vault.
+- An active Azure subscription.
+- An existing AKS cluster with `enable-oidc-issuer` and `enable-workload-identity` enabled.
+
+Azure AD workload identity is supported on both Windows and Linux clusters.
### Configure workload identity
-1. Use the Azure CLI `az account set` command to set a specific subscription to be the current active subscription. Then use the `az identity create` command to create a managed identity.
+1. Set your subscription using the [`az account set`][az-account-set] command.
```azurecli-interactive
export SUBSCRIPTION_ID=<subscription id>
export RESOURCE_GROUP=<resource group name>
export UAMI=<name for user assigned identity>
export KEYVAULT_NAME=<existing keyvault name>
export CLUSTER_NAME=<aks cluster name>

az account set --subscription $SUBSCRIPTION_ID
+ ```
+
+2. Create a managed identity using the [`az identity create`][az-identity-create] command.
+
+ ```azurecli-interactive
az identity create --name $UAMI --resource-group $RESOURCE_GROUP
export USER_ASSIGNED_CLIENT_ID="$(az identity show -g $RESOURCE_GROUP --name $UAMI --query 'clientId' -o tsv)"
export IDENTITY_TENANT=$(az aks show --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --query identity.tenantId -o tsv)
```
-2. You need to set an access policy that grants the workload identity permission to access the Key Vault secrets, access keys, and certificates. The rights are assigned using the `az keyvault set-policy` command shown below.
+3. You need to set an access policy that grants the workload identity permission to access the Key Vault secrets, access keys, and certificates. Assign these rights using the [`az keyvault set-policy`][az-keyvault-set-policy] command.
```azurecli-interactive
az keyvault set-policy -n $KEYVAULT_NAME --key-permissions get --spn $USER_ASSIGNED_CLIENT_ID
az keyvault set-policy -n $KEYVAULT_NAME --secret-permissions get --spn $USER_ASSIGNED_CLIENT_ID
az keyvault set-policy -n $KEYVAULT_NAME --certificate-permissions get --spn $USER_ASSIGNED_CLIENT_ID
```
-3. Run the [az aks show][az-aks-show] command to get the AKS cluster OIDC issuer URL.
+4. Get the AKS cluster OIDC Issuer URL using the [`az aks show`][az-aks-show] command.
```bash
export AKS_OIDC_ISSUER="$(az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query "oidcIssuerProfile.issuerUrl" -o tsv)"
echo $AKS_OIDC_ISSUER
```
- > [!NOTE]
- > If the URL is empty, verify you have installed the latest version of the `aks-preview` extension, version 0.5.102 or later. Also verify you've [enabled the
- > OIDC issuer][enable-oidc-issuer] (preview).
-
-4. Establish a federated identity credential between the Azure AD application and the service account issuer and subject. Get the object ID of the Azure AD application. Update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
+5. You need to establish a federated identity credential between the Azure AD application and the service account issuer and subject. Get the object ID of the Azure AD application using the following commands. Make sure to update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
```bash
export SERVICE_ACCOUNT_NAME="workload-identity-sa" # sample name; can be changed
export SERVICE_ACCOUNT_NAMESPACE="default" # can be changed to namespace of your workload

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
EOF
```
- Next, use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the Managed Identity, the service account issuer, and the subject.
+6. Create the federated identity credential between the managed identity, service account issuer, and subject using the [`az identity federated-credential create`][az-identity-federated-credential-create] command.
```bash
export FEDERATED_IDENTITY_NAME="aksfederatedidentity" # can be changed as needed
az identity federated-credential create --name $FEDERATED_IDENTITY_NAME --identity-name $UAMI --resource-group $RESOURCE_GROUP --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}
```
-5. Deploy a `SecretProviderClass` by using the following YAML script, noticing that the variables will be interpolated:
+
+7. Deploy a `SecretProviderClass` using the `kubectl apply` command and the following YAML script.
```bash
cat <<EOF | kubectl apply -f -
```
Azure AD workload identity (preview) is supported on both Windows and Linux clus
> [!NOTE]
> If you use `objectAlias` instead of `objectName`, make sure to update the YAML script.
-6. Deploy a sample pod. Notice the service account reference in the pod definition:
+8. Deploy a sample pod using the `kubectl apply` command and the following YAML script.
```bash
cat <<EOF | kubectl apply -f -
```
Azure AD workload identity (preview) is supported on both Windows and Linux clus
## Access with a user-assigned managed identity
-1. To access your key vault, you can use the user-assigned managed identity that you created when you [enabled a managed identity on your AKS cluster][use-managed-identity]:
+1. To access your key vault, get the client ID of the user-assigned managed identity that was created when you [enabled a managed identity on your AKS cluster][use-managed-identity] using the [`az aks show`][az-aks-show] command.
```azurecli-interactive
az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
```
- Alternatively, you can create a new managed identity and assign it to your virtual machine (VM) scale set or to each VM instance in your availability set:
+ Alternatively, you can create a new managed identity and assign it to your virtual machine (VM) scale set or to each VM instance in your availability set using the following commands.
```azurecli-interactive az identity create -g <resource-group> -n <identity-name>
Azure AD workload identity (preview) is supported on both Windows and Linux clus
az vm identity assign -g <resource-group> -n <agent-pool-vm> --identities <identity-resource-id>
```
-1. To grant your identity permissions that enable it to read your key vault and view its contents, run the following commands:
+2. Grant your identity the permissions that enable it to read and view the contents of your key vault using the following [`az keyvault set-policy`][az-keyvault-set-policy] commands for each object type.
```azurecli-interactive
- # set policy to access keys in your key vault
+ # Set policy to access keys in your key vault
az keyvault set-policy -n <keyvault-name> --key-permissions get --spn <identity-client-id>
- # set policy to access secrets in your key vault
+
+ # Set policy to access secrets in your key vault
az keyvault set-policy -n <keyvault-name> --secret-permissions get --spn <identity-client-id>
- # set policy to access certs in your key vault
+
+ # Set policy to access certs in your key vault
az keyvault set-policy -n <keyvault-name> --certificate-permissions get --spn <identity-client-id>
```
-1. Create a `SecretProviderClass` by using the following YAML, using your own values for `userAssignedIdentityID`, `keyvaultName`, `tenantId`, and the objects to retrieve from your key vault:
+3. Create a `SecretProviderClass` using the following YAML. Make sure to use your own values for `userAssignedIdentityID`, `keyvaultName`, `tenantId`, and the objects to retrieve from your key vault.
```yml
# This is a SecretProviderClass example using user-assigned identity to access your key vault
```
Azure AD workload identity (preview) is supported on both Windows and Linux clus
> [!NOTE]
> If you use `objectAlias` instead of `objectName`, make sure to update the YAML script.
-1. Apply the `SecretProviderClass` to your cluster:
+4. Apply the `SecretProviderClass` to your cluster using the `kubectl apply` command.
```bash
kubectl apply -f secretproviderclass.yaml
```
-1. Create a pod by using the following YAML:
+5. Create a pod using the following YAML.
```yml
# This is a sample pod definition for using SecretProviderClass and the user-assigned identity to access your key vault
```
Azure AD workload identity (preview) is supported on both Windows and Linux clus
secretProviderClass: "azure-kvname-user-msi" ```
-1. Apply the pod to your cluster:
+6. Apply the pod to your cluster using the `kubectl apply` command.
```bash
kubectl apply -f pod.yaml
```

## Next steps
-To validate that the secrets are mounted at the volume path that's specified in your pod's YAML, see [Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster][validate-secrets].
+To validate the secrets are mounted at the volume path specified in your pod's YAML, see [Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster][validate-secrets].
<!-- LINKS INTERNAL -->
[csi-secrets-store-driver]: ./csi-secrets-store-driver.md
-[aad-pod-identity]: ./use-azure-ad-pod-identity.md
-[aad-pod-identity-create]: ./use-azure-ad-pod-identity.md#create-an-identity
[use-managed-identity]: ./use-managed-identity.md
[validate-secrets]: ./csi-secrets-store-driver.md#validate-the-secrets
-[enable-system-assigned-identity]: ../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-azure-vm
-[workload-identity-overview]: workload-identity-overview.md
-[how-to-install-extensions]: /cli/azure/azure-cli-extensions-overview#how-to-install-extensions
[az-aks-show]: /cli/azure/aks#az-aks-show
-[az-rest]: /cli/azure/reference-index#az-rest
[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create
-[enable-oidc-issuer]: use-oidc-issuer.md
[workload-identity]: ./workload-identity-overview.md
-<!-- LINKS EXTERNAL -->
+[az-account-set]: /cli/azure/account#az-account-set
+[az-identity-create]: /cli/azure/identity#az-identity-create
+[az-keyvault-set-policy]: /cli/azure/keyvault#az-keyvault-set-policy
aks Developer Best Practices Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/developer-best-practices-resource-management.md
Title: Resource management best practices
+ Title: Resource management best practices for Azure Kubernetes Service (AKS)
-description: Learn the application developer best practices for resource management in Azure Kubernetes Service (AKS)
+description: Learn the application developer best practices for resource management in Azure Kubernetes Service (AKS).
Previously updated : 03/15/2021 Last updated : 05/25/2023

# Best practices for application developers to manage resources in Azure Kubernetes Service (AKS)
-As you develop and run applications in Azure Kubernetes Service (AKS), there are a few key areas to consider. How you manage your application deployments can negatively impact the end-user experience of services that you provide. To succeed, keep in mind some best practices you can follow as you develop and run applications in AKS.
+As you develop and run applications in Azure Kubernetes Service (AKS), there are a few key areas to consider. The way you manage your application deployments can negatively impact the end-user experience of services you provide.
-This article focuses on running your cluster and workloads from an application developer perspective. For information about administrative best practices, see [Cluster operator best practices for isolation and resource management in Azure Kubernetes Service (AKS)][operator-best-practices-isolation]. In this article, you learn:
+This article focuses on running your clusters and workloads from an application developer perspective. For information about administrative best practices, see [Cluster operator best practices for isolation and resource management in Azure Kubernetes Service (AKS)][operator-best-practices-isolation].
+
+This article covers the following topics:
> [!div class="checklist"]
+>
> * Pod resource requests and limits.
-> * Ways to develop and deploy applications with Bridge to Kubernetes and Visual Studio Code.
+> * Ways to develop, debug, and deploy applications with Bridge to Kubernetes and Visual Studio Code.
## Define pod resource requests and limits

> **Best practice guidance**
->
+>
> Set pod requests and limits on all pods in your YAML manifests. If the AKS cluster uses *resource quotas* and you don't define these values, your deployment may be rejected.
-Use pod requests and limits to manage the compute resources within an AKS cluster. Pod requests and limits inform the Kubernetes scheduler which compute resources to assign to a pod.
+Use pod requests and limits to manage compute resources within an AKS cluster. Pod requests and limits inform the Kubernetes scheduler of the compute resources to assign to a pod.
### Pod CPU/Memory requests
-*Pod requests* define a set amount of CPU and memory that the pod needs regularly.
-In your pod specifications, it's **best practice and very important** to define these requests and limits based on the above information. If you don't include these values, the Kubernetes scheduler cannot take into account the resources your applications require to aid in scheduling decisions.
+*Pod requests* define a set amount of CPU and memory the pod needs regularly.
-Monitor the performance of your application to adjust pod requests.
-* If you underestimate pod requests, your application may receive degraded performance due to over-scheduling a node.
-* If requests are overestimated, your application may have increased difficulty getting scheduled.
+In your pod specifications, it's important that you define these requests and limits based on the above information. If you don't include these values, the Kubernetes scheduler can't consider the resources your applications require to help with scheduling decisions.
-### Pod CPU/Memory limits**
-*Pod limits* set the maximum amount of CPU and memory that a pod can use.
+Monitor the performance of your application to adjust pod requests. If you underestimate pod requests, your application may receive degraded performance due to over-scheduling a node. If requests are overestimated, your application may have increased scheduling difficulty.
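+
+For example, one quick way to observe actual usage when tuning these values (a sketch; assumes the Metrics Server is available, as it is on AKS by default) is the `kubectl top` command:
+
+```bash
+# Show current CPU and memory consumption to compare against your requests and limits
+kubectl top pods --namespace <your-namespace>
+kubectl top nodes
+```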
-* *Memory limits* define which pods should be killed when nodes are unstable due to insufficient resources. Without proper limits set, pods will be killed until resource pressure is lifted.
-* While a pod may exceed the *CPU limit* periodically, the pod will not be killed for exceeding the CPU limit.
+### Pod CPU/Memory limits
-Pod limits define when a pod has lost control of resource consumption. When it exceeds the limit, the pod is marked for killing. This behavior maintains node health and minimizes impact to pods sharing the node. Not setting a pod limit defaults it to the highest available value on a given node.
+*Pod limits* set the maximum amount of CPU and memory a pod can use. *Memory limits* define which pods should be removed when nodes are unstable due to insufficient resources. Without proper limits set, pods are removed until resource pressure is lifted. While a pod may exceed the *CPU limit* periodically, the pod isn't removed for exceeding the CPU limit.
+
+Pod limits define when a pod loses control of resource consumption. When it exceeds the limit, the pod is marked for removal. This behavior maintains node health and minimizes impact to pods sharing the node. If you don't set a pod limit, it defaults to the highest available value on a given node.
Avoid setting a pod limit higher than your nodes can support. Each AKS node reserves a set amount of CPU and memory for the core Kubernetes components. Your application may try to consume too many resources on the node for other pods to successfully run.
Monitor the performance of your application at different times during the day or
> [!IMPORTANT]
>
-> In your pod specifications, define these requests and limits based on the above information. Failing to include these values prevents the Kubernetes scheduler from accounting for resources your applications require to aid in scheduling decisions.
+> In your pod specifications, define these requests and limits based on the above information. Failing to include these values prevents the Kubernetes scheduler from accounting for resources your applications require to help with scheduling decisions.
+
+If the scheduler places a pod on a node with insufficient resources, application performance is degraded. Cluster administrators **must set *resource quotas*** on a namespace that requires you to set resource requests and limits. For more information, see [resource quotas on AKS clusters][resource-quotas].
-If the scheduler places a pod on a node with insufficient resources, application performance will be degraded. Cluster administrators **must** set *resource quotas* on a namespace that requires you to set resource requests and limits. For more information, see [resource quotas on AKS clusters][resource-quotas].
+When you define a CPU request or limit, the value is measured in CPU units.
-When you define a CPU request or limit, the value is measured in CPU units.
-* *1.0* CPU equates to one underlying virtual CPU core on the node.
- * The same measurement is used for GPUs.
-* You can define fractions measured in millicores. For example, *100m* is *0.1* of an underlying vCPU core.
+* *1.0* CPU equates to one underlying virtual CPU core on the node.
+ * The same measurement is used for GPUs.
+* You can define fractions measured in millicores. For example, *100m* is *0.1* of an underlying vCPU core.
-In the following basic example for a single NGINX pod, the pod requests *100m* of CPU time, and *128Mi* of memory. The resource limits for the pod are set to *250m* CPU and *256Mi* memory:
+In the following basic example for a single NGINX pod, the pod requests *100m* of CPU time and *128Mi* of memory. The resource limits for the pod are set to *250m* CPU and *256Mi* memory.
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
```
For more information about resource measurements and assignments, see [Managing
## Develop and debug applications against an AKS cluster
-> **Best practice guidance**
+> **Best practice guidance**
>
> Development teams should deploy and debug against an AKS cluster using Bridge to Kubernetes.
-With Bridge to Kubernetes, you can develop, debug, and test applications directly against an AKS cluster. Developers within a team collaborate to build and test throughout the application lifecycle. You can continue to use existing tools such as Visual Studio or Visual Studio Code with the Bridge to Kubernetes extension.
+With Bridge to Kubernetes, you can develop, debug, and test applications directly against an AKS cluster. Developers within a team collaborate to build and test throughout the application lifecycle. You can continue to use existing tools such as Visual Studio or Visual Studio Code with the Bridge to Kubernetes extension.
-Using integrated development and test process with Bridge to Kubernetes reduces the need for local test environments like [minikube][minikube]. Instead, you develop and test against an AKS cluster, even secured and isolated clusters.
+Using integrated development and test process with Bridge to Kubernetes reduces the need for local test environments like [minikube][minikube]. Instead, you develop and test against an AKS cluster, even in secured and isolated clusters.
> [!NOTE]
-> Bridge to Kubernetes is intended for use with applications that run on Linux pods and nodes.
+> Bridge to Kubernetes is intended for use with applications running on Linux pods and nodes.
## Use the Visual Studio Code (VS Code) extension for Kubernetes
-> **Best practice guidance**
+> **Best practice guidance**
>
> Install and use the VS Code extension for Kubernetes when you write YAML manifests. You can also use the extension for integrated deployment solution, which may help application owners that infrequently interact with the AKS cluster.
-The [Visual Studio Code extension for Kubernetes][vscode-kubernetes] helps you develop and deploy applications to AKS. The extension provides:
-* Intellisense for Kubernetes resources, Helm charts, and templates.
-* Browse, deploy, and edit capabilities for Kubernetes resources from within VS Code.
-* An intellisense check for resource requests or limits being set in the pod specifications:
+The [Visual Studio Code extension for Kubernetes][vscode-kubernetes] helps you develop and deploy applications to AKS. The extension provides the following features:
+
+* Intellisense for Kubernetes resources, Helm charts, and templates.
+* The ability to browse, deploy, and edit capabilities for Kubernetes resources from within VS Code.
+* Intellisense checks for resource requests or limits being set in the pod specifications:
![VS Code extension for Kubernetes warning about missing memory limits](media/developer-best-practices-resource-management/vs-code-kubernetes-extension.png)
The [Visual Studio Code extension for Kubernetes][vscode-kubernetes] helps you d
This article focused on how to run your cluster and workloads from a cluster operator perspective. For information about administrative best practices, see [Cluster operator best practices for isolation and resource management in Azure Kubernetes Service (AKS)][operator-best-practices-isolation].
-To implement some of these best practices, see the following articles:
-
-* [Develop with Bridge to Kubernetes][btk]
+To implement some of these best practices, see [Develop with Bridge to Kubernetes][btk].
<!-- EXTERNAL LINKS -->
[k8s-resource-limits]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
To implement some of these best practices, see the following articles:
[btk]: /visualstudio/containers/overview-bridge-to-kubernetes
[operator-best-practices-isolation]: operator-best-practices-cluster-isolation.md
[resource-quotas]: operator-best-practices-scheduler.md#enforce-resource-quotas
-[k8s-node-selector]: concepts-clusters-workloads.md#node-selectors
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
ingress-nginx-controller   LoadBalancer   10.0.65.205   EXTERNAL-IP   80:30957/TCP,443:32414/TCP   1m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
```
-No ingress rules have been created yet, so the NGINX ingress controller's default 404 page is displayed if you browse to the external IP address. Ingress rules are configured in the following steps.
+If you browse to the external IP address at this stage, the NGINX ingress controller's default 404 page is displayed because no ingress rules are configured yet. You configure ingress rules in the following sections; a quick check is sketched below.
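+
+For example, you can check the response directly with `curl` (a quick sketch; replace the placeholder with your controller's external IP address):
+
+```bash
+# Expect the NGINX ingress controller's default 404 response until ingress rules exist
+curl -I http://<EXTERNAL-IP>
+```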
## Run demo applications
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
The following example output shows the resource group created successfully:
To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use a network policy that uses [Azure CNI][azure-cni-about] (advanced) network plugin. For more detailed information to help plan out the required subnet ranges and network considerations, see [configure Azure CNI networking][use-advanced-networking]. Use the [az aks create][az-aks-create] command to create an AKS cluster named *myAKSCluster*. This command will create the necessary network resources if they don't exist.

* The cluster is configured with two nodes.
-* The `--windows-admin-password` and `--windows-admin-username` parameters set the administrator credentials for any Windows Server nodes on the cluster and must meet [Windows Server password requirements][windows-server-password]. If you don't specify the `--windows-admin-password` parameter, you will be prompted to provide a value.
+* The `--windows-admin-password` and `--windows-admin-username` parameters set the administrator credentials for any Windows Server nodes on the cluster and must meet [Windows Server password requirements][windows-server-password].
* The node pool uses `VirtualMachineScaleSets`.

> [!NOTE]
> To ensure your cluster operates reliably, you should run at least 2 (two) nodes in the default node pool.
-Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered into a BASH shell).
-
-```azurecli-interactive
-echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
-```
-
-Create your cluster ensuring you specify `--windows-admin-username` parameter. The following example command creates a cluster using the value from *WINDOWS_USERNAME* you set in the previous command. Alternatively you can provide a different username directly in the parameter instead of using *WINDOWS_USERNAME*. The following command will also prompt you to create a password for the administrator credentials for the Windows Server nodes on your cluster. Alternatively, you can use the `--windows-admin-password` parameter and specify your own value there.
-
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 2 \
- --enable-addons monitoring \
- --generate-ssh-keys \
- --windows-admin-username $WINDOWS_USERNAME \
- --vm-set-type VirtualMachineScaleSets \
- --network-plugin azure
-```
-
-> [!NOTE]
-> If you get a password validation error, verify the password you set meets the [Windows Server password requirements][windows-server-password]. If your password meets the requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
->
-> If you do not specify an administrator username and password when setting `--vm-set-type VirtualMachineScaleSets` and `--network-plugin azure`, the username is set to *azureuser* and the password is set to a random value.
->
-> The administrator username can't be changed, but you can change the administrator password your AKS cluster uses for Windows Server nodes using `az aks update`. For more details, see [Windows Server node pools FAQ][win-faq-change-admin-creds].
-
+1. Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered into a BASH shell).
+
+ ```azurecli-interactive
+ echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
+ ```
+
+2. Create a password for the administrator username that you created in the previous step.
+
+ ```azurecli-interactive
+ echo "Please enter the password to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_PASSWORD
+ ```
+
+3. Create your cluster ensuring you specify the `--windows-admin-username` and `--windows-admin-password` parameters. The following example command creates a cluster using the values from *WINDOWS_USERNAME* and *WINDOWS_PASSWORD* you set in the previous commands. Alternatively, you can provide a different username directly in the parameter instead of using *WINDOWS_USERNAME*.
+
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 2 \
+ --enable-addons monitoring \
+ --generate-ssh-keys \
+ --windows-admin-username $WINDOWS_USERNAME \
+ --windows-admin-password $WINDOWS_PASSWORD \
+ --vm-set-type VirtualMachineScaleSets \
+ --network-plugin azure
+ ```
+
+ > [!NOTE]
+ > If you get a password validation error, verify the password you set meets the [Windows Server password requirements][windows-server-password]. If your password meets the requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
+ >
+ > If you do not specify an administrator username and password when setting `--vm-set-type VirtualMachineScaleSets` and `--network-plugin azure`, the username is set to *azureuser* and the password is set to a random value.
+ >
+ > The administrator username can't be changed, but you can change the administrator password your AKS cluster uses for Windows Server nodes using `az aks update`. For more details, see [Windows Server node pools FAQ][win-faq-change-admin-creds].
+
After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally the cluster can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.

## Add a Windows node pool
az aks nodepool add \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--os-type Windows \
- --os-sku Windows2019 \
+ --os-sku Windows2019 \
--name npwin \
--node-count 1
```
az aks nodepool add \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--os-type Windows \
- --os-sku Windows2022 \
+ --os-sku Windows2022 \
--name npwin \
--node-count 1
```
kubectl get nodes -o wide
The following example output shows all nodes in the cluster. Make sure that the status of all nodes is *Ready*:

```output
-NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-aks-nodepool1-12345678-vmss000000 Ready agent 34m v1.20.7 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aks-nodepool1-12345678-vmss000001 Ready agent 34m v1.20.7 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
-aksnpwcd123456 Ready agent 9m6s v1.20.7 10.240.0.97 <none> Windows Server 2019 Datacenter 10.0.17763.1879 containerd://1.4.4+unknown
-aksnpwin987654 Ready agent 25m v1.20.7 10.240.0.66 <none> Windows Server 2019 Datacenter 10.0.17763.1879 docker://19.3.14
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+aks-nodepool1-90538373-vmss000000 Ready agent 54m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS 5.15.0-1035-azure containerd://1.6.18+azure-1
+aks-nodepool1-90538373-vmss000001 Ready agent 55m v1.25.6 10.224.0.4 <none> Ubuntu 22.04.2 LTS 5.15.0-1035-azure containerd://1.6.18+azure-1
+aksnpwin000000 Ready agent 40m v1.25.6 10.224.0.62 <none> Windows Server 2022 Datacenter 10.0.20348.1668 containerd://1.6.14+azure
```

> [!NOTE]
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
Kubernetes cluster tutorial.
[dotnet-samples]: https://hub.docker.com/_/microsoft-dotnet-framework-samples/
[node-selector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[aks-release-notes]: https://github.com/Azure/AKS/releases
<!-- LINKS - internal -->
[kubernetes-concepts]: ../concepts-clusters-workloads.md
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
Last updated 05/24/2023
# Tutorial: Use a workload identity with an application on Azure Kubernetes Service (AKS)
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage Kubernetes clusters. In this tutorial, will:
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage Kubernetes clusters. In this tutorial, you:
* Deploy an AKS cluster using the Azure CLI with OpenID Connect (OIDC) Issuer and managed identity.
* Create an Azure Key Vault and secret.
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
* This tutorial assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
* If you aren't familiar with Azure AD workload identity, see the [Azure AD workload identity overview][workload-identity-overview].
-* When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?][aks-two-resource-groups].
+* When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?][aks-two-resource-groups]
## Prerequisites
aks Monitor Apiserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-apiserver.md
Kubernetes audit logging isn't enabled by default on an AKS cluster on account o
* **Destination details:** Select the checkbox for **Log Analytics**.

> [!NOTE]
-> There could be substantial cost involved once kube-audit logs are enabled. Consider disabling kube-audit logging when not required.
+> There could be substantial cost involved once kube-audit logs are enabled. Consider disabling kube-audit logging when not required. An alternative approach that significantly reduces the number of logs and helps reduce cost is to enable collection from the kube-audit-admin category instead, which excludes the get and list audit events; a CLI sketch follows below.
> For strategies to reduce your Azure Monitor costs, see [Cost optimization and Azure Monitor][cost-optimization-azure-monitor].

After a few moments, the new setting appears in your list of settings for this resource. Logs are streamed to the specified destinations as new event data is generated. It might take up to 15 minutes between when an event is emitted and when it appears in a [Log Analytics workspace][log-analytics-workspace-overview].
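
As a sketch of the kube-audit-admin alternative mentioned in the note above (the setting name and workspace ID are illustrative placeholders), you can create the diagnostic setting with the Azure CLI instead of the portal:

```azurecli-interactive
# Stream only kube-audit-admin events (excludes get and list) to a Log Analytics workspace
az monitor diagnostic-settings create \
    --name aks-audit-admin \
    --resource $(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv) \
    --workspace <log-analytics-workspace-resource-id> \
    --logs '[{"category": "kube-audit-admin", "enabled": true}]'
```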
For more information about AKS metrics, logs, and other important values, see [M
[cost-optimization-azure-monitor]: ../azure-monitor/best-practices-cost.md
[azure-diagnostics-table]: /azure/azure-monitor/reference/tables/azurediagnostics
[container-insights-overview]: ..//azure-monitor/containers/container-insights-overview.md
-[monitoring-aks-data-reference]: monitor-aks-reference.md
+[monitoring-aks-data-reference]: monitor-aks-reference.md
aks Open Service Mesh Istio Migration Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-istio-migration-guidance.md
+
+ Title: Migration guidance for Open Service Mesh to Istio
+description: Migration guidance for Open Service Mesh configurations to Istio
+ Last updated : 5/15/2023
+# Migration guidance for Open Service Mesh (OSM) configurations to Istio
+
+> [!IMPORTANT]
+> This article aims to provide a simplified understanding of how to identify OSM configurations and translate them to equivalent Istio configurations for migrating workloads from OSM to Istio. It is by no means an exhaustive, detailed guide.
+
+This article provides practical guidance for mapping OSM policies to the [Istio](https://istio.io/) policies to help migrate your microservices deployments managed by OSM over to being managed by Istio. We utilize the OSM [Bookstore sample application](https://docs.openservicemesh.io/docs/getting_started/install_apps/) as a base reference for current OSM users. The following walk-through deploys the Bookstore application, follows the same steps as the OSM walk-through, and explains how to apply the OSM [SMI](https://smi-spec.io/) traffic policies using their Istio equivalents.
+
+If you are not using OSM and are new to Istio, start with [Istio's own Getting Started guide](https://istio.io/latest/docs/setup/getting-started/) to learn how to use the Istio service mesh for your applications. If you are currently using OSM, make sure you are familiar with the OSM [Bookstore sample application](https://docs.openservicemesh.io/docs/getting_started/install_apps/) walk-through on how OSM configures traffic policies. The following walk-through does not duplicate the current documentation, and references specific topics when relevant. You should be comfortable and fully aware of the bookstore application architecture before proceeding.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- [Azure CLI installed](/cli/azure/install-azure-cli).
+- OSM is uninstalled from your Kubernetes cluster.
+- Any existing OSM Bookstore application, including namespaces, is uninstalled and deleted from your cluster.
+- [Install the Istio AKS service mesh add-on](istio-deploy-addon.md).
+
+## Modifications needed to the OSM Sample Bookstore Application
+
+To allow Istio to manage the OSM bookstore application, a couple of changes are needed in the existing manifests. Those changes affect the bookstore and mysql services.
+
+### Bookstore Modifications
+
+In the OSM Bookstore walk-through, the bookstore service is deployed along with another bookstore-v2 service to demonstrate how OSM provides traffic shifting. These deployed services allowed you to split the client (`bookbuyer`) traffic between multiple service endpoints. The first new concept to understand is how Istio handles what it refers to as [Traffic Shifting](https://istio.io/latest/docs/tasks/traffic-management/traffic-shifting/).
+
+OSM implementation of traffic shifting is based on the [SMI Traffic Split specification](https://github.com/servicemeshinterface/smi-spec/blob/main/apis/traffic-split/v1alpha4/traffic-split.md). The SMI Traffic Split specification requires the existence of multiple top-level services that are added as backends with the desired weight metric to shift client requests from one service to another. Istio accomplishes traffic shifting using a combination of a [Virtual Service](https://istio.io/latest/docs/reference/config/networking/virtual-service/) and a [Destination Rule](https://istio.io/latest/docs/reference/config/networking/destination-rule/). It is highly recommended that you familiarize yourself with both the concepts of a virtual service and destination rule.
+
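+For reference, the OSM-side configuration being replaced is an SMI TrafficSplit similar to the following sketch (based on the SMI `v1alpha2` schema; the name and weights are illustrative):
+
+```yml
+apiVersion: split.smi-spec.io/v1alpha2
+kind: TrafficSplit
+metadata:
+  name: bookstore-split
+  namespace: bookstore
+spec:
+  service: bookstore.bookstore # the top-level service that clients address
+  backends:
+  - service: bookstore
+    weight: 50
+  - service: bookstore-v2
+    weight: 50
+```
+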
+Put simply, the Istio virtual service defines routing rules for clients that request the host (service name). Virtual services allow multiple versions of a deployment to be associated with one virtual service hostname for clients to target. Multiple deployments can be labeled for the same service, representing different versions of the application behind the same hostname. The Istio virtual service can then be configured to weight the request to a specific version of the service. The available versions of the service are configured using the `subsets` attribute in an Istio destination rule.
+
+The modification made to the bookstore service and deployment for Istio removes the need to have an explicit second service to target, which the SMI Traffic Split needs. There's no need for another service account for the bookstore v2 service either, since it's consolidated under the bookstore service. The original OSM [traffic-access-v1.yaml](https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.2/manifests/access/traffic-access-v1.yaml) manifest modification to Istio for both bookstore v1 and v2 is shown in the [Create Pods, Services, and Service Accounts](#create-pods-services-and-service-accounts) section below. We demonstrate traffic splitting, known in Istio as traffic shifting, later in the walk-through; a preview sketch follows.
+
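+As a preview of that traffic shifting step, the updated bookstore virtual service could look like the following sketch (the 50/50 weights are illustrative):
+
+```yml
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+  name: bookstore-virtualservice
+  namespace: bookstore
+spec:
+  hosts:
+  - bookstore
+  http:
+  - route:
+    - destination:
+        host: bookstore
+        subset: v1
+      weight: 50
+    - destination:
+        host: bookstore
+        subset: v2
+      weight: 50
+```
+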
+### MySql Modifications
+
+Changes to the mysql stateful set are only needed in the service configuration. Under the service specification, OSM needed the `targetPort` and `appProtocol` attributes. These attributes aren't needed for Istio. The updated service for mysqldb looks like the following:
+
+```yml
+apiVersion: v1
+kind: Service
+metadata:
+  name: mysqldb
+  namespace: bookwarehouse
+ labels:
+ app: mysqldb
+ service: mysqldb
+spec:
+ ports:
+ - port: 3306
+ name: tcp
+ selector:
+ app: mysqldb
+```
+
+## Deploy the Modified Bookstore Application
+
+Similar to the OSM Bookstore walk-through, we start with a new install of the bookstore application.
+
+### Create the Namespaces
+
+```bash
+kubectl create namespace bookstore
+kubectl create namespace bookbuyer
+kubectl create namespace bookthief
+kubectl create namespace bookwarehouse
+```
+
+### Add a namespace label for Istio sidecar injection
+
+For OSM, using the command `osm namespace add <namespace>` created the necessary annotations on the namespace for the OSM controller to add automatic sidecar injection. With Istio, you only need to label a namespace, which instructs the Istio controller to automatically inject the Envoy sidecar proxies.
+
+```bash
+kubectl label namespace bookstore istio-injection=enabled
+kubectl label namespace bookbuyer istio-injection=enabled
+kubectl label namespace bookthief istio-injection=enabled
+kubectl label namespace bookwarehouse istio-injection=enabled
+```
+
+### Deploy the Istio Virtual Service and Destination Rule for Bookstore
+
+As mentioned earlier in the Bookstore Modifications section, Istio handles traffic shifting using a virtual service weight attribute that we configure later in the walk-through. Here we deploy the virtual service and destination rule for the bookstore service. Even though bookstore version 2 is also deployed, the Istio virtual service only supplies a route to version 1. Unlike OSM, which deployed another service for the bookstore version 2 application and needed a TrafficSplit to split traffic between client requests, Istio can shift traffic to multiple Kubernetes application deployments (versions) labeled for the same service.
+
+In this walk-through, both bookstore versions (v1 and v2) are deployed at the same time, but only version 1 is reachable due to the virtual service configuration. There's no need to deploy another service for bookstore version 2; we enable a route to it later when we update the bookstore virtual service with the necessary weight attribute to do traffic shifting.
+
+```bash
+kubectl apply -f - <<EOF
+# Create bookstore virtual service
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ name: bookstore-virtualservice
+ namespace: bookstore
+spec:
+ hosts:
+ - bookstore
+ http:
+ - route:
+ - destination:
+ host: bookstore
+ subset: v1
+---
+# Create bookstore destination rule
+apiVersion: networking.istio.io/v1alpha3
+kind: DestinationRule
+metadata:
+ name: bookstore-destination
+ namespace: bookstore
+spec:
+ host: bookstore
+ subsets:
+ - name: v1
+ labels:
+ app: bookstore
+ version: v1
+ - name: v2
+ labels:
+ app: bookstore
+ version: v2
+EOF
+```
+
+### Create Pods, Services, and Service Accounts
+
+We use a single manifest file that contains the modifications discussed earlier in the walk-through to deploy the `bookbuyer`, `bookthief`, `bookstore`, `bookwarehouse`, and `mysql` applications.
+
+```bash
+kubectl apply -f - <<EOF
+##################################################################################################
+# bookbuyer service
+##################################################################################################
+
+# Create bookbuyer Service Account
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: bookbuyer
+ namespace: bookbuyer
+---
+# Create bookbuyer Deployment
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: bookbuyer
+ namespace: bookbuyer
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: bookbuyer
+ version: v1
+ template:
+ metadata:
+ labels:
+ app: bookbuyer
+ version: v1
+ spec:
+ serviceAccountName: bookbuyer
+ nodeSelector:
+ kubernetes.io/arch: amd64
+ kubernetes.io/os: linux
+ containers:
+ - name: bookbuyer
+ image: openservicemesh/bookbuyer:latest-main
+ imagePullPolicy: Always
+ command: ["/bookbuyer"]
+ env:
+ - name: "BOOKSTORE_NAMESPACE"
+ value: bookstore
+ - name: "BOOKSTORE_SVC"
+ value: bookstore
+---
+##################################################################################################
+# bookthief service
+##################################################################################################
+
+# Create bookthief ServiceAccount
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: bookthief
+ namespace: bookthief
+---
+# Create bookthief Deployment
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: bookthief
+ namespace: bookthief
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: bookthief
+ template:
+ metadata:
+ labels:
+ app: bookthief
+ version: v1
+ spec:
+ serviceAccountName: bookthief
+ nodeSelector:
+ kubernetes.io/arch: amd64
+ kubernetes.io/os: linux
+ containers:
+ - name: bookthief
+ image: openservicemesh/bookthief:latest-main
+ imagePullPolicy: Always
+ command: ["/bookthief"]
+ env:
+ - name: "BOOKSTORE_NAMESPACE"
+ value: bookstore
+ - name: "BOOKSTORE_SVC"
+ value: bookstore
+ - name: "BOOKTHIEF_EXPECTED_RESPONSE_CODE"
+ value: "503"
+---
+##################################################################################################
+# bookstore service version 1 & 2
+##################################################################################################
+
+# Create bookstore Service
+apiVersion: v1
+kind: Service
+metadata:
+ name: bookstore
+ namespace: bookstore
+ labels:
+ app: bookstore
+spec:
+ ports:
+ - port: 14001
+ name: bookstore-port
+ selector:
+ app: bookstore
+---
+# Create bookstore Service Account
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: bookstore
+ namespace: bookstore
+---
+# Create bookstore-v1 Deployment
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: bookstore-v1
+ namespace: bookstore
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: bookstore
+ version: v1
+ template:
+ metadata:
+ labels:
+ app: bookstore
+ version: v1
+ spec:
+ serviceAccountName: bookstore
+ nodeSelector:
+ kubernetes.io/arch: amd64
+ kubernetes.io/os: linux
+ containers:
+ - name: bookstore
+ image: openservicemesh/bookstore:latest-main
+ imagePullPolicy: Always
+ ports:
+ - containerPort: 14001
+ name: web
+ command: ["/bookstore"]
+ args: ["--port", "14001"]
+ env:
+ - name: BOOKWAREHOUSE_NAMESPACE
+ value: bookwarehouse
+ - name: IDENTITY
+ value: bookstore-v1
+---
+# Create bookstore-v2 Deployment
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: bookstore-v2
+ namespace: bookstore
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: bookstore
+ version: v2
+ template:
+ metadata:
+ labels:
+ app: bookstore
+ version: v2
+ spec:
+ serviceAccountName: bookstore
+ nodeSelector:
+ kubernetes.io/arch: amd64
+ kubernetes.io/os: linux
+ containers:
+ - name: bookstore
+ image: openservicemesh/bookstore:latest-main
+ imagePullPolicy: Always
+ ports:
+ - containerPort: 14001
+ name: web
+ command: ["/bookstore"]
+ args: ["--port", "14001"]
+ env:
+ - name: BOOKWAREHOUSE_NAMESPACE
+ value: bookwarehouse
+ - name: IDENTITY
+ value: bookstore-v2
+---
+##################################################################################################
+# bookwarehouse service
+##################################################################################################
+
+# Create bookwarehouse Service Account
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: bookwarehouse
+ namespace: bookwarehouse
+---
+# Create bookwarehouse Service
+apiVersion: v1
+kind: Service
+metadata:
+ name: bookwarehouse
+ namespace: bookwarehouse
+ labels:
+ app: bookwarehouse
+spec:
+ ports:
+ - port: 14001
+ name: bookwarehouse-port
+ selector:
+ app: bookwarehouse
+---
+# Create bookwarehouse Deployment
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: bookwarehouse
+ namespace: bookwarehouse
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: bookwarehouse
+ template:
+ metadata:
+ labels:
+ app: bookwarehouse
+ version: v1
+ spec:
+ serviceAccountName: bookwarehouse
+ nodeSelector:
+ kubernetes.io/arch: amd64
+ kubernetes.io/os: linux
+ containers:
+ - name: bookwarehouse
+ image: openservicemesh/bookwarehouse:latest-main
+ imagePullPolicy: Always
+ command: ["/bookwarehouse"]
+##################################################################################################
+# mysql service
+##################################################################################################
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: mysql
+ namespace: bookwarehouse
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: mysqldb
+  namespace: bookwarehouse
+ labels:
+ app: mysqldb
+ service: mysqldb
+spec:
+ ports:
+ - port: 3306
+ name: tcp
+ selector:
+ app: mysqldb
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: mysql
+ namespace: bookwarehouse
+spec:
+ serviceName: mysql
+ replicas: 1
+ selector:
+ matchLabels:
+ app: mysql
+ template:
+ metadata:
+ labels:
+ app: mysql
+ spec:
+ serviceAccountName: mysql
+ nodeSelector:
+ kubernetes.io/os: linux
+ containers:
+ - image: mysql:5.6
+ name: mysql
+ env:
+ - name: MYSQL_ROOT_PASSWORD
+ value: mypassword
+ - name: MYSQL_DATABASE
+ value: booksdemo
+ ports:
+ - containerPort: 3306
+ name: mysql
+ volumeMounts:
+ - mountPath: /mysql-data
+ name: data
+ readinessProbe:
+ tcpSocket:
+ port: 3306
+ initialDelaySeconds: 15
+ periodSeconds: 10
+ volumes:
+ - name: data
+ emptyDir: {}
+ volumeClaimTemplates:
+ - metadata:
+ name: data
+ spec:
+ accessModes: [ "ReadWriteOnce" ]
+ resources:
+ requests:
+ storage: 250M
+EOF
+```
+
+To view these resources on your cluster, run the following commands:
+
+```bash
+kubectl get pods,deployments,serviceaccounts -n bookbuyer
+kubectl get pods,deployments,serviceaccounts -n bookthief
+
+kubectl get pods,deployments,serviceaccounts,services,endpoints -n bookstore
+kubectl get pods,deployments,serviceaccounts,services,endpoints -n bookwarehouse
+```
+
+### View the Application UIs
+
+Similar to the original OSM walk-through, if you have the OSM repo cloned, you can utilize the port forwarding scripts to view the UIs of each application [here](https://release-v1-2.docs.openservicemesh.io/docs/getting_started/install_apps/#view-the-application-uis). For now, we're only concerned with viewing the `bookbuyer` and `bookthief` UIs.
+
+```bash
+cp .env.example .env
+bash <<EOF
+./scripts/port-forward-bookbuyer-ui.sh &
+./scripts/port-forward-bookthief-ui.sh &
+wait
+EOF
+```
+
+In a browser, open up the following URLs:
+
+http://localhost:8080 - bookbuyer
+
+http://localhost:8083 - bookthief
+
+## Configure Istio's Traffic Policies
+
+To maintain continuity with the original OSM Bookstore walk-through for the translation to Istio, we discuss [OSM's Permissive Traffic Policy Mode](https://release-v1-2.docs.openservicemesh.io/docs/getting_started/traffic_policies/#permissive-traffic-policy-mode). OSM's permissive traffic policy mode was a concept of allowing or denying traffic in the mesh without any specific [SMI Traffic Access Control rule](https://github.com/servicemeshinterface/smi-spec/blob/main/apis/traffic-access/v1alpha3/traffic-access.md) deployed. The permissive traffic mode configuration existed to allow users to onboard applications into the mesh, while gaining mTLS encryption, without requiring explicit rules to allow applications in the mesh to communicate. It avoided breaking your application's communications as soon as OSM managed it, and provided time to define your rules while ensuring that application communications were mTLS encrypted. This setting could be set to `true` or `false` via OSM's MeshConfig, as sketched below.
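+
+For illustration, that OSM toggle could be flipped with a `kubectl patch` similar to the following sketch (assumes the default `osm-mesh-config` name in the `osm-system` namespace):
+
+```bash
+# Disable OSM permissive traffic policy mode so only explicit SMI policies allow traffic
+kubectl patch meshconfig osm-mesh-config -n osm-system \
+  --type merge \
+  -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}'
+```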
+
+Istio handles mTLS enforcement differently. Unlike OSM, Istio's permissive mode automatically configures sidecar proxies to use mTLS but allows the service to accept both plaintext and mTLS traffic. The equivalent of OSM's permissive mode configuration is to utilize Istio's `PeerAuthentication` settings. `PeerAuthentication` can be applied granularly at the namespace level or for the entire mesh; a mesh-wide sketch follows. For more information on Istio's enforcement of mTLS, read the [Istio Mutual TLS Migration article](https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/).
+
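+For example, a mesh-wide strict policy is a single `PeerAuthentication` in the Istio root namespace (shown here as `istio-system`; the AKS Istio add-on may use a different root namespace):
+
+```yml
+apiVersion: security.istio.io/v1beta1
+kind: PeerAuthentication
+metadata:
+  name: default
+  namespace: istio-system # Istio root namespace; applies to the entire mesh
+spec:
+  mtls:
+    mode: STRICT
+```
+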
+### Enforce Istio Strict Mode on Bookstore Namespaces
+
+It's important to remember that, just like OSM's permissive mode, Istio's `PeerAuthentication` configuration is only related to the use of mTLS enforcement. Actual layer-7 policies, much like those used in OSM's HTTPRouteGroups, are handled using Istio's AuthorizationPolicy configurations that you see later in the walk-through.
+
+We individually put the `bookbuyer`, `bookthief`, `bookstore`, and `bookwarehouse` namespaces in Istio's mTLS strict mode.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: security.istio.io/v1beta1
+kind: PeerAuthentication
+metadata:
+ name: bookbuyer
+ namespace: bookbuyer
+spec:
+ mtls:
+ mode: STRICT
+---
+apiVersion: security.istio.io/v1beta1
+kind: PeerAuthentication
+metadata:
+ name: bookthief
+ namespace: bookthief
+spec:
+ mtls:
+ mode: STRICT
+---
+apiVersion: security.istio.io/v1beta1
+kind: PeerAuthentication
+metadata:
+ name: bookstore
+ namespace: bookstore
+spec:
+ mtls:
+ mode: STRICT
+---
+apiVersion: security.istio.io/v1beta1
+kind: PeerAuthentication
+metadata:
+ name: bookwarehouse
+ namespace: bookwarehouse
+spec:
+ mtls:
+ mode: STRICT
+EOF
+```
+
+### Deploy Istio Access Control Policies
+
+Similar to OSM's [SMI Traffic Target](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-access/v1alpha2/traffic-access.md) and [SMI Traffic Specs](https://github.com/servicemeshinterface/smi-spec/blob/v0.6.0/apis/traffic-specs/v1alpha4/traffic-specs.md) resources to define access control and routing policies for the applications to communicate, Istio accomplishes these similar fine-grained controls by using `AuthorizationPolicy` configurations.
+
+Let's walk through translating the bookstore TrafficTarget policy, which specifically allows `bookbuyer` to communicate with it using only certain layer-7 paths, headers, and methods. The following is a portion of the [traffic-access-v1.yaml](https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.2/manifests/access/traffic-access-v1.yaml) manifest.
+
+```yml
+kind: TrafficTarget
+apiVersion: access.smi-spec.io/v1alpha3
+metadata:
+ name: bookstore
+ namespace: bookstore
+spec:
+ destination:
+ kind: ServiceAccount
+ name: bookstore
+ namespace: bookstore
+ rules:
+ - kind: HTTPRouteGroup
+ name: bookstore-service-routes
+ matches:
+ - buy-a-book
+ - books-bought
+ sources:
+ - kind: ServiceAccount
+ name: bookbuyer
+ namespace: bookbuyer
+---
+apiVersion: specs.smi-spec.io/v1alpha4
+kind: HTTPRouteGroup
+metadata:
+ name: bookstore-service-routes
+ namespace: bookstore
+spec:
+ matches:
+ - name: books-bought
+ pathRegex: /books-bought
+ methods:
+ - GET
+ headers:
+ - "user-agent": ".*-http-client/*.*"
+ - "client-app": "bookbuyer"
+ - name: buy-a-book
+ pathRegex: ".*a-book.*new"
+ methods:
+ - GET
+```
+
+Notice that under the TrafficTarget policy, the spec is where you explicitly define which source service can communicate with a destination service. We can see that we're allowing the source `bookbuyer` to communicate with the destination bookstore. If we translate the service-to-service authorization from an OSM `TrafficTarget` configuration to an Istio `AuthorizationPolicy`, it looks like the following:
+
+```yml
+apiVersion: security.istio.io/v1beta1
+kind: AuthorizationPolicy
+metadata:
+ name: bookstore
+ namespace: bookstore
+spec:
+ selector:
+ matchLabels:
+ app: bookstore
+ action: ALLOW
+ rules:
+ - from:
+ - source:
+ principals: ["cluster.local/ns/bookbuyer/sa/bookbuyer"]
+```
+
+In Istio's `AuthorizationPolicy`, notice how the OSM TrafficTarget policy's destination service maps to the selector label match and the namespace the service resides in. The source service is shown under the rules section, where there's a `source.principals` attribute that maps to the service account name for the `bookbuyer` service.
+
+In addition to the source/destination configuration in the OSM TrafficTarget, OSM binds an HTTPRouteGroup to further define the layer-7 authorization the source has access to. We can see this in the following portion of the HTTPRouteGroup. There are two `matches` for the allowed source service.
+
+```yml
+apiVersion: specs.smi-spec.io/v1alpha4
+kind: HTTPRouteGroup
+metadata:
+ name: bookstore-service-routes
+ namespace: bookstore
+spec:
+ matches:
+ - name: books-bought
+ pathRegex: /books-bought
+ methods:
+ - GET
+ headers:
+ - "user-agent": ".*-http-client/*.*"
+ - "client-app": "bookbuyer"
+ - name: buy-a-book
+ pathRegex: ".*a-book.*new"
+ methods:
+ - GET
+```
+
+There's a `match` named `books-bought` that allows the source to access the path `/books-bought` using a `GET` method with user-agent and client-app header information, and a `buy-a-book` match that uses a regular expression for a path containing `.*a-book.*new` using a `GET` method.
+
+We can define these OSM HTTPRouteGroup configurations in the rules section of the Istio `AuthorizationPolicy` shown below:
+
+```yml
+apiVersion: "security.istio.io/v1beta1"
+kind: "AuthorizationPolicy"
+metadata:
+ name: "bookstore"
+ namespace: bookstore
+spec:
+ selector:
+ matchLabels:
+ app: bookstore
+ action: ALLOW
+ rules:
+ - from:
+ - source:
+ principals: ["cluster.local/ns/bookbuyer/sa/bookbuyer"]
+ - source:
+ namespaces: ["bookbuyer"]
+ to:
+ - operation:
+ methods: ["GET"]
+ paths: ["*/books-bought", "*/buy-a-book/new"]
+ - when:
+ - key: request.headers[User-Agent]
+ values: ["*-http-client/*"]
+ - key: request.headers[Client-App]
+ values: ["bookbuyer"]
+```
+
+We can now deploy the migrated traffic-access-v1.yaml manifest, as understood by Istio, below. Because there's no `AuthorizationPolicy` for the bookthief, the bookthief UI should stop incrementing books from bookstore v1:
+
+```bash
+kubectl apply -f - <<EOF
+##################################################################################################
+# bookstore policy
+##################################################################################################
+apiVersion: "security.istio.io/v1beta1"
+kind: "AuthorizationPolicy"
+metadata:
+ name: "bookstore"
+ namespace: bookstore
+spec:
+ selector:
+ matchLabels:
+ app: bookstore
+ action: ALLOW
+ rules:
+ - from:
+ - source:
+ principals: ["cluster.local/ns/bookbuyer/sa/bookbuyer"]
+ - source:
+ namespaces: ["bookbuyer"]
+ to:
+ - operation:
+ methods: ["GET"]
+ paths: ["*/books-bought", "*/buy-a-book/new"]
+ - when:
+ - key: request.headers[User-Agent]
+ values: ["*-http-client/*"]
+ - key: request.headers[Client-App]
+ values: ["bookbuyer"]
+---
+##################################################################################################
+# bookwarehouse policy
+##################################################################################################
+apiVersion: security.istio.io/v1beta1
+kind: AuthorizationPolicy
+metadata:
+ name: "bookwarehouse"
+ namespace: bookwarehouse
+spec:
+ selector:
+ matchLabels:
+ app: bookwarehouse
+ action: ALLOW
+ rules:
+ - from:
+ - source:
+ principals: ["cluster.local/ns/bookstore/sa/bookstore"]
+ - source:
+ namespaces: ["bookstore"]
+ to:
+ - operation:
+ methods: ["POST"]
+---
+##################################################################################################
+# mysql policy
+##################################################################################################
+apiVersion: security.istio.io/v1beta1
+kind: AuthorizationPolicy
+metadata:
+ name: "mysql"
+ namespace: bookwarehouse
+spec:
+ selector:
+ matchLabels:
+ app: mysql
+ action: ALLOW
+ rules:
+ - from:
+ - source:
+ principals: ["cluster.local/ns/bookwarehouse/sa/bookwarehouse"]
+ - source:
+ namespaces: ["bookwarehouse"]
+ to:
+ - operation:
+ ports: ["3306"]
+EOF
+```
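+
+As a quick check, assuming the demo apps still serve their UI on the OSM demo's default port 14001, you can port-forward to the bookthief and confirm that its book count stops incrementing:
+
+```bash
+kubectl port-forward -n bookthief deploy/bookthief 8080:14001
+# Browse http://localhost:8080 and watch the bookthief counters.
+```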
+
+### Allowing the Bookthief Application to access Bookstore
+
+Currently there's no `AuthorizationPolicy` that allows the bookthief to communicate with bookstore. We can deploy the following `AuthorizationPolicy` to allow that communication. Notice the added rule entries in the bookstore policy that authorize the bookthief.
+
+```bash
+kubectl apply -f - <<EOF
+##################################################################################################
+# bookstore policy
+##################################################################################################
+apiVersion: "security.istio.io/v1beta1"
+kind: "AuthorizationPolicy"
+metadata:
+ name: "bookstore"
+ namespace: bookstore
+spec:
+ selector:
+ matchLabels:
+ app: bookstore
+ action: ALLOW
+ rules:
+ - from:
+ - source:
+ principals: ["cluster.local/ns/bookbuyer/sa/bookbuyer", "cluster.local/ns/bookthief/sa/bookthief"]
+ - source:
+ namespaces: ["bookbuyer", "bookthief"]
+ to:
+ - operation:
+ methods: ["GET"]
+ paths: ["*/books-bought", "*/buy-a-book/new"]
+ - when:
+ - key: request.headers[User-Agent]
+ values: ["*-http-client/*"]
+ - key: request.headers[Client-App]
+ values: ["bookbuyer"]
+---
+##################################################################################################
+# bookwarehouse policy
+##################################################################################################
+apiVersion: security.istio.io/v1beta1
+kind: AuthorizationPolicy
+metadata:
+ name: "bookwarehouse"
+ namespace: bookwarehouse
+spec:
+ selector:
+ matchLabels:
+ app: bookwarehouse
+ action: ALLOW
+ rules:
+ - from:
+ - source:
+ principals: ["cluster.local/ns/bookstore/sa/bookstore"]
+ - source:
+ namespaces: ["bookstore"]
+ to:
+ - operation:
+ methods: ["POST"]
+---
+##################################################################################################
+# mysql policy
+##################################################################################################
+apiVersion: security.istio.io/v1beta1
+kind: AuthorizationPolicy
+metadata:
+ name: "mysql"
+ namespace: bookwarehouse
+spec:
+ selector:
+ matchLabels:
+ app: mysql
+ action: ALLOW
+ rules:
+ - from:
+ - source:
+ principals: ["cluster.local/ns/bookwarehouse/sa/bookwarehouse"]
+ - source:
+ namespaces: ["bookwarehouse"]
+ to:
+ - operation:
+ ports: ["3306"]
+EOF
+```
+
+The bookthief UI should now be incrementing books from bookstore v1.
+
+## Configure Traffic Shifting between two Service Versions
+
+This section demonstrates how to balance traffic between two versions of a Kubernetes service, known as traffic shifting in Istio. As you recall from a previous section, OSM's implementation of traffic shifting relied on two distinct services being deployed and adding those service names to the backend configuration of the `TrafficSplit` policy. That deployment architecture isn't needed for Istio's implementation of traffic shifting. With Istio, we can create multiple deployments that represent each version of the service application and shift traffic to those specific versions via the Istio `virtualservice` configuration.
+
+The currently deployed `virtualservice` only has a route rule to the v1 version of the bookstore shown below:
+
+```yml
+spec:
+ hosts:
+ - bookstore
+ http:
+ - route:
+ - destination:
+ host: bookstore
+ subset: v1
+```
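+
+These `v1` and `v2` subsets are defined by an Istio `DestinationRule`. As a minimal sketch, assuming the two bookstore deployments carry `version: v1` and `version: v2` pod labels, the rule would look like this:
+
+```yml
+apiVersion: networking.istio.io/v1alpha3
+kind: DestinationRule
+metadata:
+  name: bookstore
+  namespace: bookstore
+spec:
+  host: bookstore
+  subsets:
+  - name: v1
+    labels:
+      version: v1   # assumed label on the v1 deployment's pods
+  - name: v2
+    labels:
+      version: v2   # assumed label on the v2 deployment's pods
+```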
+
+We update the `virtualservice` to shift 100% of the weight to the v2 version of the bookstore.
+
+```bash
+kubectl apply -f - <<EOF
+# Create bookstore virtual service
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ name: bookstore-virtualservice
+ namespace: bookstore
+spec:
+ hosts:
+ - bookstore
+ http:
+ - route:
+ - destination:
+ host: bookstore
+ subset: v1
+ weight: 0
+ - destination:
+ host: bookstore
+ subset: v2
+ weight: 100
+EOF
+```
+
+You should now see both the `bookbuyer` and `bookthief` UIs incrementing for the `bookstore` v2 service only. You can continue to experiment by changing the `weight` attribute to shift traffic between the two `bookstore` versions.
+
+## Summary
+
+We hope this walk-through provided the necessary guidance on how to migrate your current OSM policies to Istio policies. Take time to review the [Istio Concepts](https://istio.io/latest/docs/concepts/) and walk through [Istio's own Getting Started guide](https://istio.io/latest/docs/setup/getting-started/) to learn how to use the Istio service mesh to manage your applications.
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-network.md
By default, AKS uses a managed identity for its cluster identity. However, you a
As each node and pod receives its own IP address, plan out the address ranges for the AKS subnets. Keep in mind: * The subnet must be large enough to provide IP addresses for every node, pod, and network resource that you deploy. * With both kubenet and Azure CNI networking, each running node has a default limit on the number of pods.
-* Each AKS cluster must be placed in its own subnet.
* Avoid using IP address ranges that overlap with existing network resources. * Necessary to allow connectivity to on-premises or peered networks in Azure. * To handle scale out events or cluster upgrades, you need extra IP addresses available in the assigned subnet.
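A rough illustrative sizing example (the numbers are hypothetical, following the Azure CNI guidance of reserving an extra node's worth of addresses for upgrades): a 50-node cluster configured for 30 pods per node needs (50 + 1) + ((50 + 1) * 30) = 1,581 IP addresses, so a /21 subnet (2,048 addresses) would fit.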
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
# Quickstart: Deploy an application using the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes
-In this quickstart, you will get familiar with using the [Dapr cluster extension][dapr-overview] in an AKS or Arc-enabled Kubernetes cluster. You will be deploying a hello world example, consisting of a Python application that generates messages and a Node application that consumes and persists them.
+In this quickstart, you get familiar with using the [Dapr cluster extension][dapr-overview] in an AKS or Arc-enabled Kubernetes cluster. You deploy a hello world example, consisting of a Python application that generates messages and a Node application that consumes and persists them.
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). * [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] installed.
-* An AKS or Arc-enabled Kubernetes cluster with the [Dapr cluster extension][dapr-overview] enabled
+* An AKS or Arc-enabled Kubernetes cluster with the [Dapr cluster extension][dapr-overview] enabled.
## Clone the repository
-To obtain the files you'll be using to deploy the sample application, clone the [Quickstarts repository][hello-world-gh] and change to the `hello-kubernetes` directory:
+To obtain the files you use to deploy the sample application, clone the [Quickstarts repository][hello-world-gh] and change to the `hello-kubernetes` directory:
```bash git clone https://github.com/dapr/quickstarts.git
-cd quickstarts/hello-kubernetes
+cd quickstarts/tutorials/hello-kubernetes/
``` ## Create and configure a state store
-Dapr can use a number of different state stores (Redis, Azure Cosmos DB, DynamoDB, Cassandra, etc.) to persist and retrieve state. For this example, we will use Redis.
+Dapr can use many different state stores, such as Redis, Azure Cosmos DB, DynamoDB, and Cassandra, to persist and retrieve state. For this example, we use Redis.
### Create a Redis store
-1. Open the [Azure portal][azure-portal-cache] to start the Azure Redis Cache creation flow.
-2. Fill out the necessary information
-3. Click "Create" to kickoff deployment of your Redis instance.
-4. Take note of the hostname of your Redis instance, which you can retrieve from the "Overview" in Azure. It should look like `xxxxxx.redis.cache.windows.net:6380`.
-5. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and create a Kubernetes secret to store your Redis password:
+1. Open the [Azure portal][azure-portal-cache] to start the Azure Cache for Redis creation flow.
+2. Fill out the necessary information.
+3. Click **Create** to kick off deployment of your Redis instance.
+4. Take note of the hostname of your Redis instance, which you can retrieve from the **Overview** section in Azure. The hostname might be similar to the following example: `xxxxxx.redis.cache.windows.net:6380`.
+5. Once your instance is created, you'll need to grab your access key. Navigate to **Access keys** under **Settings** and create a Kubernetes secret to store your Redis password:
```bash kubectl create secret generic redis --from-literal=redis-password=<your-redis-password>
kubectl create secret generic redis --from-literal=redis-password=<your-redis-pa
### Configure the Dapr components
-Once your store is created, you will need to add the keys to the redis.yaml file in the deploy directory of the Hello World repository. Replace the `redisHost` value with your own Redis master address, and the `redisPassword` with your own Secret. You can learn more [here][dapr-component-secrets].
+Once your store is created, you'll need to add the keys to the redis.yaml file in the deploy directory of the Hello World repository. Replace the `redisHost` value with your own Redis master address, and the `redisPassword` with your own Secret. You can learn more [here][dapr-component-secrets].
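For reference, a minimal sketch of the general shape of that state store component (the host value is a placeholder; the secret name and key match the `kubectl create secret` command above):

```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: xxxxxx.redis.cache.windows.net:6380   # placeholder hostname
  - name: redisPassword
    secretKeyRef:
      name: redis            # the Kubernetes secret created earlier
      key: redis-password
```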
You will also need to add the following two lines below `redisPassword` to enable connection over TLS:
kubectl apply -f ./deploy/node.yaml
> kubectl rollout status deploy/nodeapp > ```
-This will deploy the Node.js app to Kubernetes. The Dapr control plane will automatically inject the Dapr sidecar to the Pod. If you take a look at the `node.yaml` file, you will see how Dapr is enabled for that deployment:
+This deploys the Node.js app to Kubernetes. The Dapr control plane will automatically inject the Dapr sidecar into the Pod. If you take a look at the `node.yaml` file, you can see how Dapr is enabled for that deployment:
* `dapr.io/enabled: true` - this tells the Dapr control plane to inject a sidecar to this deployment.
You should see output similar to the following:
``` > [!TIP]
-> This is a good time to get acquainted with the Dapr dashboard- a convenient interface to check status, information and logs of applications running on Dapr. The following command will make it available on `http://localhost:8080/`:
+> This is a good time to get acquainted with the Dapr dashboard, a convenient interface to check status, information, and logs of applications running on Dapr. To access the dashboard at `http://localhost:8080/`, run the following command:
> ```bash > kubectl port-forward svc/dapr-dashboard -n dapr-system 8080:8080 > ```
You should see output similar to the following:
Take a quick look at the Python app. Navigate to the Python app directory in the `hello-kubernetes` quickstart and open `app.py`.
-This is a basic Python app that posts JSON messages to `localhost:3500`, which is the default listening port for Dapr. You can invoke the Node.js application's `neworder` endpoint by posting to `v1.0/invoke/nodeapp/method/neworder`. The message contains some data with an `orderId` that increments once per second:
+This example is a basic Python app that posts JSON messages to `localhost:3500`, which is the default listening port for Dapr. You can invoke the Node.js application's `neworder` endpoint by posting to `v1.0/invoke/nodeapp/method/neworder`. The message contains some data with an `orderId` that increments once per second:
```python n = 0
kubectl apply -f ./deploy/python.yaml
``` > [!NOTE]
-> As with above, the following command will wait for the deployment to complete:
+> As with the previous command, the following command will wait for the deployment to complete:
> ```bash > kubectl rollout status deploy/pythonapp > ```
After successfully deploying this sample application:
[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup <!-- EXTERNAL -->
-[hello-world-gh]: https://github.com/dapr/quickstarts/tree/v1.4.0/hello-kubernetes
+[hello-world-gh]: https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes
[azure-portal-cache]: https://portal.azure.com/#create/Microsoft.Cache [dapr-component-secrets]: https://docs.dapr.io/operations/components/component-secrets/
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
You'll need to store your container images in an Azure Container Registry (ACR)
The below example uses the [`az acr create`][az-acr-create] command to create an ACR named *myhelmacr* in *myResourceGroup* with the *Basic* SKU.
+> [!NOTE]
+> The ACR name that you choose must be unique across the `azurecr.io` domain. If you specify an existing ACR name, an error is returned and the ACR is not created.
+ ```azurecli-interactive az group create --name myResourceGroup --location eastus az acr create --resource-group myResourceGroup --name myhelmacr --sku Basic
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md
The following example output shows the list of cluster nodes.
``` $ kubectl get nodes
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-37463671-vmss000000 Ready agent 2m37s v1.18.10
-aks-nodepool1-37463671-vmss000001 Ready agent 2m28s v1.18.10
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6
+aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6
``` ## Next steps
aks Tutorial Kubernetes Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-acr.md
The following example output shows a list of the current local Docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE mcr.microsoft.com/azuredocs/azure-vote-front v1 84b41c268ad9 7 minutes ago 944MB mcr.microsoft.com/oss/bitnami/redis 6.0.8 3a54a920bb6c 2 days ago 103MB
-tiangolo/uwsgi-nginx-flask python3.6 a16ce562e863 6 weeks ago 944MB
``` To use the *azure-vote-front* container image with ACR, you need to tag the image with the login server address of your registry. The tag is used for routing when pushing container images to an image registry.
REPOSITORY TAG IMAGE ID
mcr.microsoft.com/azuredocs/azure-vote-front v1 84b41c268ad9 16 minutes ago 944MB mycontainerregistry.azurecr.io/azure-vote-front v1 84b41c268ad9 16 minutes ago 944MB mcr.microsoft.com/oss/bitnami/redis 6.0.8 3a54a920bb6c 2 days ago 103MB
-tiangolo/uwsgi-nginx-flask python3.6 a16ce562e863 6 weeks ago 944MB
``` ## Push images to registry
aks Tutorial Kubernetes Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-app.md
The following command uses the sample `docker-compose.yaml` file to create the c
docker-compose up -d ```
-When completed, use the [`docker images`][docker-images] command to see the created images. Three images are downloaded or created. The *azure-vote-front* image contains the front-end application and uses the *nginx-flask* image as a base. The *redis* image is used to start a Redis instance.
+When completed, use the [`docker images`][docker-images] command to see the created images. Two images are downloaded or created. The *azure-vote-front* image contains the front-end application. The *redis* image is used to start a Redis instance.
``` $ docker images
-REPOSITORY TAG IMAGE ID CREATED SIZE
-mcr.microsoft.com/azuredocs/azure-vote-front v1 84b41c268ad9 9 seconds ago 944MB
-mcr.microsoft.com/oss/bitnami/redis 6.0.8 3a54a920bb6c 2 days ago 103MB
-tiangolo/uwsgi-nginx-flask python3.6 a16ce562e863 6 weeks ago 944MB
+REPOSITORY TAG IMAGE ID CREATED SIZE
+mcr.microsoft.com/oss/bitnami/redis 6.0.8 3a54a920bb6c 2 years ago 103MB
+mcr.microsoft.com/azuredocs/azure-vote-front v1 4d4d08c25677 5 years ago 935MB
``` Run the [`docker ps`][docker-ps] command to see the running containers.
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
Title: Use pod security policies in Azure Kubernetes Service (AKS)
description: Learn how to control pod admissions using PodSecurityPolicy in Azure Kubernetes Service (AKS) Previously updated : 04/25/2023 Last updated : 05/25/2023 # Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) (preview)
Last updated 04/25/2023
> > The pod security policy feature will be deprecated starting with Kubernetes version *1.21* and will be removed in version *1.25*. >
-> The AKS API will mark the pod security policy as `Deprecated` on 06-01-2023 and remove it in version *1.25*. We recommend you migrate to pod security admission controller before the deprecation deadline to stay within Azure support.
+> The AKS API will mark the pod security policy as `Deprecated` on 06-01-2023 and remove it in version *1.25*. We recommend you migrate to [pod security admission controller](use-psa.md) before the deprecation deadline to stay within Azure support.
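As a hedged illustration of the replacement model (the namespace name is a placeholder), the built-in Pod Security admission controller is enabled by labeling a namespace with the desired enforcement level:

```bash
kubectl label namespace my-app pod-security.kubernetes.io/enforce=baseline
```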
## Before you begin
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
There are several API Management endpoints to which you can assign a custom doma
| **Developer portal (legacy)** | Default is: `<apim-service-name>.portal.azure-api.net` | | **Developer portal** | Default is: `<apim-service-name>.developer.azure-api.net` | | **Management** | Default is: `<apim-service-name>.management.azure-api.net` |
+| **Configuration API (v2)** | Default is: `<apim-service-name>.configuration.azure-api.net` |
| **SCM** | Default is: `<apim-service-name>.scm.azure-api.net` | ### Considerations
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
To register the app, perform the following steps:
+1. On the "Overview" screen, make note of the **Tenant ID**, as well as the **Primary domain**.
1. From the left navigation, select **App registrations** > **New registration**. 1. In the **Register an application** page, enter a **Name** for your app registration. 1. In **Supported account types**, select the account type that can access this application.
To register the app, perform the following steps:
- **Pick an existing app registration in this directory**: Choose an app registration from the current tenant and automatically gather the necessary app information. The system will attempt to create a new client secret against the app registration and automatically configure your app to use it. A default issuer URL is set based on the supported account types configured in the app registration. If you intend to change this default, consult the table below. - **Provide the details of an existing app registration**: Specify details for an app registration from another tenant or if your account does not have permission in the current tenant to query the registrations. For this option, you must manually fill in the configuration values according to the table below.
+ The **authentication endpoint** for a workforce tenant should be a [value specific to the cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints). For example, a workforce tenant in global Azure would use "https://login.microsoftonline.com" as its authentication endpoint. Make note of the authentication endpoint value, as it is needed to construct the right **Issuer URL**.
+ # [Customer tenant (Preview)](#tab/customer-tenant) For a customer tenant, you must manually fill in the configuration values according to the table below.
+ The **authentication endpoint** for a customer tenant should be `https://<tenant-subdomain>.ciamlogin.com`, replacing *\<tenant-subdomain>* with the default subdomain for the tenant. The default subdomain is part of the **primary domain** for the tenant, which should be of the form `<tenant-subdomain>.onmicrosoft.com` and was set during tenant creation. For example, if the tenant had the domain "contoso.onmicrosoft.com", the tenant subdomain would be "contoso", and the authentication endpoint would be "https://contoso.ciamlogin.com". Make note of the authentication endpoint value, as it is needed to construct the right **Issuer URL**.
+ When filling in the configuration details directly, use the values you collected during the app registration creation process:
To register the app, perform the following steps:
|-|-| |Application (client) ID| Use the **Application (client) ID** of the app registration. | |Client Secret| Use the client secret you generated in the app registration. With a client secret, hybrid flow is used and the App Service will return access and refresh tokens. When the client secret is not set, implicit flow is used and only an ID token is returned. These tokens are sent by the provider and stored in the App Service authentication token store.|
- |Issuer URL| Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with the **Directory (tenant) ID** in which the app registration was created. This value is used to redirect users to the correct Azure AD tenant, as well as to download the appropriate metadata to determine the appropriate token signing keys and token issuer claim value for example. For applications that use Azure AD v1, omit `/v2.0` in the URL.<br/><br/>Any configuration other than a tenant-specific endpoint will be treated as multi-tenant. In multi-tenant configurations, no validation of the issuer or tenant ID is performed by the system, and these checks should be fully handled in [your app's authorization logic](#authorize-requests).|
- |Allowed Token Audiences| The configured **Application (client) ID** is *always* implicitly considered to be an allowed audience. If your application represents an API that will be called by other clients, you should also add the **Application ID URI** that you configured on the app registration. There is a limit of 500 characters total across the list of allowed audiences.|
+ |Issuer URL| Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the **authentication endpoint** you determined in the previous step for your tenant type and cloud environment, also replacing *\<tenant-id>* with the **Directory (tenant) ID** in which the app registration was created. For applications that use Azure AD v1, omit `/v2.0` in the URL. <br/><br/> This value is used to redirect users to the correct Azure AD tenant, as well as to download the appropriate metadata to determine the appropriate token signing keys and token issuer claim value for example. Any configuration other than a tenant-specific endpoint will be treated as multi-tenant. In multi-tenant configurations, no validation of the issuer or tenant ID is performed by the system, and these checks should be fully handled in [your app's authorization logic](#authorize-requests).|
+ |Allowed Token Audiences| This field is optional. The configured **Application (client) ID** is *always* implicitly considered to be an allowed audience. If your application represents an API that will be called by other clients, you should also add the **Application ID URI** that you configured on the app registration. There is a limit of 500 characters total across the list of allowed audiences.|
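For example (the tenant ID is a placeholder), a workforce tenant in global Azure would yield an Issuer URL of `https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/v2.0`.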
The client secret will be stored as a slot-sticky [application setting] named `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
app-service Routine Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/routine-maintenance.md
Azure App Service represents a fleet of scale units, which provide hosting of we
### Are business hours reflected?
-Maintenance operations are optimized to run outside standard business hours (9-5pm) as statistically that is a better timing for any interruptions and restarts of workloads as there is a less stress on the system (in customer applications and transitively also on the platform itself).
+Maintenance operations are optimized to start outside standard business hours (9am-5pm), as that is statistically a better time for interruptions and restarts of workloads; there is less stress on the system (in customer applications and, transitively, on the platform itself). For App Service Plan and App Service Environment v2, maintenance can continue into business hours during longer maintenance events.
### What are my options to control routine maintenance?
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Follow the steps below to setup the Azure Developer CLI and provision and deploy
-1. Run the `azd up` command to clone, provision and deploy the app resources. Provide the name of the template you wish to use for the `--template` parameter. The `azd up` command will also prompt you to login to Azure and provide a name and location for the app.
+1. Run the `azd init` command to initialize the `azd` app template. Include the `--template` parameter to specify the name of an existing `azd` template you wish to use. More information about working with templates is available on the [choose an `azd` template](/azure/developer/azure-developer-cli/azd-templates) page.
### [Flask](#tab/flask)+
+ For this tutorial, Flask users should specify the [Python (Flask) web app with PostgreSQL](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app.git) template.
```bash
- azd up --template msdocs-flask-postgresql-sample-app
+ azd init --template msdocs-flask-postgresql-sample-app
``` ### [Django](#tab/django)
-
+
+ For this tutorial, Django users should specify the [Python (Django) web app with PostgreSQL](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git) template.
+
+ ```bash
+ azd init --template msdocs-django-postgresql-sample-app
+ ```
+
+1. Run the `azd auth login` command to sign in to Azure.
+
+ ```bash
+ azd auth login
+ ```
+
+1. Run the `azd up` command to provision the necessary Azure resources and deploy the app code. The `azd up` command will also prompt you to select the desired subscription and location to deploy to.
+ ```bash
- azd up --template msdocs-django-postgresql-sample-app
+ azd up
``` 1. When the `azd up` command finishes running, the URL for your deployed web app in the console will be printed. Click, or copy and paste the web app URL into your browser to explore the running app and verify that it is working correctly. All of the Azure resources and application code were set up for you by the `azd up` command.
The sections ahead review the steps that `azd` handled for you in more depth. Yo
### 1. Cloned and initialized the project
-The `azd up` command cloned the sample app project template to your machine. The project template includes the following components:
+The `azd init` command cloned the sample app project template to your machine. The project template includes the following components:
* **Source code**: The code and assets for a Flask or Django web app that can be used for local development or deployed to Azure. * **Bicep files**: Infrastructure as code (IaC) files that are used by `azd` to create the necessary resources in Azure.
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
Previously updated : 02/14/2023 Last updated : 05/23/2023
applications. This article describes the symptoms, cause, and resolution for eac
If the backend health status is **Unhealthy**, the portal view will resemble the following screenshot:
-![Application Gateway backend health - Unhealthy](./media/application-gateway-backend-health-troubleshooting/appgwunhealthy.png)
+[ ![Application Gateway backend health - Unhealthy](./media/application-gateway-backend-health-troubleshooting/appgwunhealthy.png) ](./media/application-gateway-backend-health-troubleshooting/appgwunhealthy.png#lightbox)
Or if you're using an Azure PowerShell, CLI, or Azure REST API query, you'll get a response that resembles the following example:
Learn more about [Application Gateway probe matching](./application-gateway-prob
> [!NOTE] > For all TLS related error messages, to learn more about SNI behavior and differences between the v1 and v2 SKU, check the [TLS overview](ssl-overview.md) page.
-### Backend server certificate invalid CA
+### Common Name (CN) doesn't match
-**Message:** The server certificate used by the backend is not signed by a well-known Certificate Authority (CA). Allow the backend on the Application Gateway by uploading the root certificate of the server certificate used by the backend.
+**Message:**
+(For V2) The Common Name of the leaf certificate presented by the backend server does not match the Probe or Backend Setting hostname of the application gateway.</br>
+(For V1) The Common Name (CN) of the backend certificate doesn't match.
-**Cause:** End-to-end SSL with Application Gateway v2 requires the backend server's certificate to be verified in order to deem the server Healthy. For a TLS/SSL certificate to be trusted, that certificate of the backend server must be issued by a CA that's included in the trusted store of Application Gateway. If the certificate wasn't issued by a trusted CA (for example, if a self-signed certificate was used), users should upload the issuer's certificate to Application Gateway.
+**Cause:**
+(For V2) This occurs when you have selected HTTPS protocol in the backend setting, and neither the Custom Probe's nor Backend Setting's hostname (in that order) matches the Common Name (CN) of the backend server's certificate.</br>
+(For V1) The FQDN of the backend pool target doesn't match the Common Name (CN) of the backend server's certificate.
-**Solution:** Follow these steps to export and upload the trusted root certificate to Application Gateway. (These steps are for Windows clients.)
+**Solution:** The hostname information is critical for the backend HTTPS connection, since that value is used to set the Server Name Indication (SNI) during the TLS handshake. You can fix this problem in the following ways, based on your gateway's configuration.
-1. Sign in to the machine where your application is hosted.
-2. Select Win+R or right-click the **Start** button, and then select **Run**.
-3. Enter `certlm.msc` and select Enter. You can also search for Certificate Manager on the **Start** menu.
-4. Locate the certificate, typically in `Certificates - Local Computer\Personal\Certificates`, and open it.
-5. Select the root certificate and then select **View Certificate**.
-6. In the Certificate properties, select the **Details** tab.
-7. On the **Details** tab, select the **Copy to File** option and save the file in the Base-64 encoded X.509 (.CER) format.
-8. Open the Application Gateway HTTP **Settings** page in the Azure portal.
-9. Open the HTTP settings, select **Add Certificate**, and locate the certificate file that you saved.
-10. Select **Save** to save the HTTP settings.
+For V2,
+* If you're using a Default Probe: You can specify a hostname in the associated Backend setting of your application gateway. You can select "Override with specific hostname" or "Pick hostname from backend target" in the backend setting.
+* If you're using a Custom Probe: For a Custom Probe, you can use the "host" field to specify the Common Name of the backend server certificate. Alternatively, if the Backend Setting is already configured with the same hostname, you can choose "Pick hostname from backend setting" in the probe settings.
-Alternatively, you can export the root certificate from a client machine by directly accessing the server (bypassing Application Gateway) through browser and exporting the root certificate from the browser.
+For V1, verify that the backend pool target's FQDN is the same as the Common Name (CN).
-For more information about how to extract and upload Trusted Root Certificates in Application Gateway, see [Export trusted root certificate (for v2 SKU)](./certificates-for-backend-authentication.md#export-trusted-root-certificate-for-v2-sku).
+**Tips:** To determine the Common Name (CN) of the backend server's certificate, you can use any of these methods.
-### Trusted root certificate mismatch
+* By using a browser or any client:
+Access the backend server directly (not through Application Gateway) and click on the certificate padlock in the address bar to view the certificate details. You'll find it under the "Issued To" section.
+[ ![Screenshot that shows certificate details in a browser.](./media/application-gateway-backend-health-troubleshooting/browser-cert.png) ](./media/application-gateway-backend-health-troubleshooting/browser-cert.png#lightbox)
-**Message:** The root certificate of the server certificate used by the backend doesn't match the trusted root certificate added to the application gateway. Ensure that you add the correct root certificate to whitelist the backend.
+* By logging into the backend server (Windows):
+ 1. Sign into the machine where your application is hosted.
+ 2. Select Win+R or right-click the Start button and select Run.
+ 3. Enter certlm.msc and select Enter. You can also search for Certificate Manager on the Start menu.
+ 4. Locate the certificate (typically in Certificates - Local Computer\Personal\Certificates), and open the certificate.
+ 5. On the Details tab, check the certificate Subject.
-**Cause:** End-to-end SSL with Application Gateway v2 requires the backend server's certificate to be verified in order to deem the server Healthy. For a TLS/SSL certificate to be trusted, the backend server certificate must be issued by a CA that's included in the trusted store of Application Gateway. If the certificate wasn't issued by a trusted CA (for example, a self-signed certificate was used), users should upload the issuer's certificate to Application Gateway.
+* By logging into the backend server (Linux):
+Run this OpenSSL command, specifying the right certificate filename: `openssl x509 -in certificate.crt -subject -noout`
-The certificate that has been uploaded to Application Gateway HTTP settings must match the root certificate of the backend server certificate.
+### Backend certificate has expired
-**Solution:** If you receive this error message, there's a mismatch between the certificate that has been uploaded to Application Gateway and the one that was uploaded to the backend server.
+**Message:** Backend certificate is invalid. Current date is not within the "Valid from" and "Valid to" date range on the certificate.
-Follow steps 1-10 in the preceding section to upload the correct trusted root certificate to Application Gateway.
+**Cause:** An expired certificate is deemed unsafe and hence the application gateway marks the backend server with an expired certificate as unhealthy.
-For more information about how to extract and upload Trusted Root Certificates in Application Gateway, see [Export trusted root certificate (for v2 SKU)](./certificates-for-backend-authentication.md#export-trusted-root-certificate-for-v2-sku).
+**Solution:** The solution depends on which part of the certificate chain has expired on the backend server.
-> [!NOTE]
-> This error can also occur if the backend server doesn't exchange the complete chain of the cert, including the Root Intermediate (if applicable) Leaf during the TLS handshake. To verify, you can use OpenSSL commands from any client and connect to the backend server by using the configured settings in the Application Gateway probe.
+For V2 SKU,
+* Expired Leaf (also known as Domain or Server) certificate: Renew the server certificate with your certificate provider and install the new certificate on the backend server. Ensure that you have installed the complete certificate chain comprising `Leaf (topmost) > Intermediate(s) > Root`. Based on the type of Certificate Authority (CA), you may take the following actions on your gateway.
+ * Publicly known CA: If the certificate issuer is a well-known CA, you need not take any action on the application gateway.
+ * Private CA: If the leaf certificate is issued by a private CA, you need to check if the signing Root CA certificate has changed. In such cases, you must upload the new Root CA certificate (.CER) to the associated Backend setting of your gateway.
-For example:
+* Expired Intermediate or Root certificate: Typically, these certificates have relatively extended validity periods (a decade or two). When a Root or Intermediate certificate expires, we recommend you check with your certificate provider for the renewed certificate files. Ensure you have installed this updated and complete certificate chain comprising `Leaf (topmost) > Intermediate(s) > Root` on the backend server.
+ * If the Root certificate remains unchanged or if the issuer is a well-known CA, you need NOT take any action on the application gateway.
+ * When using a Private CA, if the Root CA certificate itself or the root of the renewed Intermediate certificate has changed, you must upload the new Root certificate to the application gateway's Backend Setting.
-```
-OpenSSL> s_client -connect 10.0.0.4:443 -servername www.example.com -showcerts
-```
+For V1 SKU,
+* Renew the expired Leaf (also known as Domain or Server) certificate with your CA and upload the same leaf certificate (.CER) to the associated Backend setting of your application gateway.
-If the output doesn't show the complete chain of the certificate being returned, export the certificate again with the complete chain, including the root certificate. Configure that certificate on your backend server.
+### The intermediate certificate was not found
+**Message:** The **Intermediate certificate is missing** from the certificate chain presented by the backend server. Ensure the certificate chain is complete and correctly ordered on the backend server.
-```
- CONNECTED(00000188)\
- depth=0 OU = Domain Control Validated, CN = \*.example.com\
- verify error:num=20:unable to get local issuer certificate\
- verify return:1\
- depth=0 OU = Domain Control Validated, CN = \*.example.com\
- verify error:num=21:unable to verify the first certificate\
- verify return:1\
- \-\-\-\
- Certificate chain\
- 0 s:/OU=Domain Control Validated/CN=*.example.com\
- i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2\
- \--BEGIN CERTIFICATE--\
- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\
- \--END CERTIFICATE--
-```
+**Cause:** The intermediate certificate(s) aren't installed in the certificate chain on the backend server.
-### Backend certificate invalid common name (CN)
+**Solution:** An Intermediate certificate is used to sign the Leaf certificate and is thus needed to complete the chain. Check with your Certificate Authority (CA) for the necessary Intermediate certificate(s) and install them on your backend server. This chain must start with the Leaf Certificate, then the Intermediate certificate(s), and finally, the Root CA certificate. We recommend installing the complete chain on the backend server, including the Root CA certificate. For reference, look at the certificate chain example under [Leaf must be topmost in chain](application-gateway-backend-health-troubleshooting.md#leaf-must-be-topmost-in-chain).
-**Message:** The Common Name (CN) of the backend certificate doesn't match the host header of the probe.
+> [!NOTE]
+> A self-signed certificate which is NOT a Certificate Authority will also result in the same error. This is because application gateway considers such self-signed certificate as "Leaf" certificate and looks for its signing Intermediate certificate. You can follow this article to correctly [generate a self-signed certificate](./self-signed-certificates.md).
-**Cause:** Application Gateway checks whether the host name specified in the backend HTTP settings matches that of the CN presented by the backend serverΓÇÖs TLS/SSL certificate. This verification is Standard_v2 and WAF_v2 SKU (V2) behavior. The Standard and WAF SKU (v1) Server Name Indication (SNI) is set as the FQDN in the backend pool address. For more information on SNI behavior and differences between v1 and v2 SKU, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md).
+These images show the difference between the two types of self-signed certificates.
+[ ![Screenshot showing difference between self-signed certificates.](./media/application-gateway-backend-health-troubleshooting/self-signed-types.png) ](./media/application-gateway-backend-health-troubleshooting/self-signed-types.png#lightbox)
-In the v2 SKU, if there's a default probe (no custom probe has been configured and associated), SNI will be set from the host name mentioned in the HTTP settings. Or, if ΓÇ£Pick host name from backend addressΓÇ¥ is mentioned in the HTTP settings, where the backend address pool contains a valid FQDN, this setting will be applied.
+### The leaf or server certificate was not found
+**Message:** The **Leaf certificate is missing** from the certificate chain presented by the backend server. Ensure the chain is complete and correctly ordered on the backend server.
-If there's a custom probe associated with the HTTP settings, SNI will be set from the host name mentioned in the custom probe configuration. Or, if **Pick hostname from backend HTTP settings** is selected in the custom probe, SNI will be set from the host name mentioned in the HTTP settings.
+**Cause:** The Leaf (also known as Domain or Server) certificate is missing from the certificate chain on the backend server.
-If **Pick hostname from backend address** is set in the HTTP settings, the backend address pool must contain a valid FQDN.
+**Solution:** You can get the leaf certificate from your Certificate Authority (CA). Install this leaf certificate and all its signing certificates (Intermediate and Root CA certificates) on the backend server. This chain must start with the Leaf Certificate, then the Intermediate certificate(s), and finally, the Root CA certificate. We recommend installing the complete chain on the backend server, including the Root CA certificate. For reference, look at the certificate chain example under [Leaf must be topmost in chain](application-gateway-backend-health-troubleshooting.md#leaf-must-be-topmost-in-chain).
-If you receive this error message, the CN of the backend certificate doesn't match the host name configured in the custom probe, or the HTTP settings if **Pick hostname from backend HTTP settings** is selected. If you're using a default probe, the host name will be set as **127.0.0.1**. If thatΓÇÖs not a desired value, you should create a custom probe and associate it with the HTTP settings.
+### Server certificate is not issued by a publicly known CA
-**Solution:**
+**Message:** The backend **Server certificate** is not signed by a well-known Certificate Authority (CA). To use unknown CA certificates, its Root certificate must be uploaded to the Backend Setting of the application gateway.
-To resolve the issue, follow these steps.
+**Cause:** You have chosen "well-known CA certificate" in the backend setting, but the Root certificate presented by the backend server is not publicly known.
-For Windows:
+**Solution:** When a Leaf certificate is issued by a private Certificate Authority (CA), the signing Root CA's certificate must be uploaded to the application gateway's associated Backend Setting. This enables your application gateway to establish a trusted connection with that backend server. To fix this, go to the associated backend setting, choose "not a well-known CA" and upload the Root CA certificate (.CER). To identify and download the root certificate, you can follow the same steps as described under [Trusted root certificate mismatch](application-gateway-backend-health-troubleshooting.md#trusted-root-certificate-mismatch-root-certificate-is-available-on-the-backend-server).
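If you manage the gateway with the Azure CLI, a hedged sketch of uploading a trusted root certificate (the resource names are placeholders) looks like the following; you'd still associate the certificate with the relevant backend HTTP setting afterwards:

```azurecli
az network application-gateway root-cert create \
    --resource-group myResourceGroup \
    --gateway-name myAppGateway \
    --name myPrivateRootCA \
    --cert-file ./root.cer
```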
-1. Sign in to the machine where your application is hosted.
-2. Select Win+R or right-click the **Start** button and select **Run**.
-3. Enter **certlm.msc** and select Enter. You can also search for Certificate Manager on the **Start** menu.
-4. Locate the certificate (typically in `Certificates - Local Computer\Personal\Certificates`), and open the certificate.
-5. On the **Details** tab, check the certificate **Subject**.
-6. Verify the CN of the certificate from the details and enter the same in the host name field of the custom probe or in the HTTP settings (if **Pick hostname from backend HTTP settings** is selected). If that's not the desired host name for your website, you must get a certificate for that domain or enter the correct host name in the custom probe or HTTP setting configuration.
+### The Intermediate certificate is not signed by a publicly known CA
+**Message:** The **Intermediate certificate** is not signed by a well-known Certificate Authority (CA). Ensure the certificate chain is complete and correctly ordered on the backend server.
-For Linux using OpenSSL:
+**Cause:** You have chosen "well-known CA certificate" in the backend setting, but the Intermediate certificate presented by the backend server is not signed by any publicly known CA.
-1. Run this command in OpenSSL:
+**Solution:** When a certificate is issued by a private Certificate Authority (CA), the signing Root CA's certificate must be uploaded to the application gateway's associated Backend Setting. This enables your application gateway to establish a trusted connection with that backend server. To fix this, contact your private CA to get the appropriate Root CA certificate (.CER) and upload that CER file to the Backend Setting of your application gateway by selecting "not a well-known CA". We also recommend installing the complete chain on the backend server, including the Root CA certificate, for easy verification.
- ```
- openssl x509 -in certificate.crt -text -noout
- ```
+### Trusted root certificate mismatch (no Root certificate on the backend server)
-2. From the properties displayed, find the CN of the certificate and enter the same in the host name field of the http settings. If that's not the desired host name for your website, you must get a certificate for that domain or enter the correct host name in the custom probe or HTTP setting configuration.
+**Message:** The Intermediate certificate is not signed by any Root certificates uploaded to the application gateway. Ensure the certificate chain is complete and correctly ordered on the backend server.
-### Backend certificate is invalid
+**Cause:** None of the Root CA certificates uploaded to the associated Backend Setting have signed the Intermediate certificate installed on the backend server. The backend server has only Leaf and Intermediate certificates installed.
-**Message:** Backend certificate is invalid. Current date is not within the "Valid from" and "Valid to" date range on the certificate.
+**Solution:** A Leaf certificate is signed by an Intermediate certificate, which is signed by a Root CA certificate. When using a certificate from a Private Certificate Authority (CA), you must upload the corresponding Root CA certificate to the application gateway. Contact your private CA to get the appropriate Root CA certificate (.CER) and upload that CER file to the Backend setting of your application gateway.
++
+### Trusted root certificate mismatch (Root certificate is available on the backend server)
+
+**Message:** The root certificate of the server certificate used by the backend doesn't match the trusted root certificate added to the application gateway. Ensure that you add the correct root certificate to allowlist the backend.
+
+**Cause:** This error occurs when none of the Root certificates uploaded to your application gateway's backend setting matches the Root certificate present on the backend server.
+
+**Solution:** This applies to a backend server certificate that is issued by a Private Certificate Authority (CA) or is self-signed. Identify and upload the right Root CA certificate to the associated backend setting.
+
+**Tips:** To identify and download the root certificate, you can use any of these methods.
+
+* Using a browser: Access the backend server directly (not through Application Gateway) and click on the certificate padlock in the address bar to view the certificate details.
+ 1. Choose the root certificate in the chain and click on Export. By default, this will be a .CRT file.
+ 2. Open that .CRT file.
+ 3. Go to the Details tab and click on "Copy to File".
+ 4. On the Certificate Export Wizard page, click Next.
+ 5. Select "Base-64 encoded X.509 (.CER)" and click Next.
+ 6. Give a new file name and click Next.
+ 7. Click Finish to get a .CER file.
+ 8. Upload this Root certificate (.CER) of your private CA to the application gateway's backend setting.
+
+* By logging into the backend server (Windows)
+ 1. Sign into the machine where your application is hosted.
+ 2. Select Win+R or right-click the Start button, and then select Run.
+ 3. Enter certlm.msc and select Enter. You can also search for Certificate Manager on the Start menu.
+ 4. Locate the certificate, typically in Certificates - Local Computer\Personal\Certificates, and open it.
+ 5. Select the root certificate and then select View Certificate.
+ 6. In the Certificate properties, select the Details tab and click "Copy to File".
+ 7. On the Certificate Export Wizard page, click Next.
+ 8. Select "Base-64 encoded X.509 (.CER)" and click Next.
+ 9. Give a new file name and click Next.
+ 10. Click Finish to get a .CER file.
+ 11. Upload this Root certificate (.CER) of your private CA to the application gateway's backend setting.
+
+### Leaf must be topmost in chain
+
+**Message:** The Leaf certificate is not the topmost certificate in the chain presented by the backend server. Ensure the certificate chain is correctly ordered on the backend server.
+
+**Cause:** The Leaf (also known as Domain or Server) certificate is not installed in the correct order on the backend server.
+
+**Solution:** The certificate installation on the backend server must include an ordered list of certificates comprising the leaf certificate and all its signing certificates (Intermediate and Root CA certificates). This chain must start with the leaf certificate, then the Intermediate certificate(s), and finally, the Root CA certificate. We recommend installing the complete chain on the backend server, including the Root CA certificate.
-**Cause:** Every certificate comes with a validity range, and the HTTPS connection won't be secure unless the server's TLS/SSL certificate is valid. The current data must be within the **valid from** and **valid to** range. If it's not, the certificate is considered invalid, and that will create a
-security issue in which Application Gateway marks the backend server as Unhealthy.
+The following is an example of a server certificate installation along with its Intermediate and Root CA certificates, denoted as depths (0, 1, 2, and so on) in OpenSSL. You can verify the same for your backend server's certificate using the following OpenSSL commands.</br>
+`openssl s_client -connect <FQDN>:443 -showcerts`</br>
+OR </br>
+`openssl s_client -connect <IPaddress>:443 -servername <TLS SNI hostname> -showcerts`
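+
+For reference, a correctly ordered chain in the `-showcerts` output looks something like the following (the `contoso.com` names are illustrative, and the exact field formatting varies by OpenSSL version):
+
+```
+Certificate chain
+ 0 s:CN = backend.contoso.com
+   i:CN = Contoso Intermediate CA
+ 1 s:CN = Contoso Intermediate CA
+   i:CN = Contoso Root CA
+ 2 s:CN = Contoso Root CA
+   i:CN = Contoso Root CA
+```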
-**Solution:** If your TLS/SSL certificate has expired, renew the certificate
-with your vendor and update the server settings with the new
-certificate. If it's a self-signed certificate, you must generate a valid certificate and upload the root certificate to the Application Gateway HTTP settings. To do that, follow these steps:
+[ ![Screenshot showing typical chain of certificates.](./media/application-gateway-backend-health-troubleshooting/cert-chain.png) ](./media/application-gateway-backend-health-troubleshooting/cert-chain.png#lightbox)
-1. Open your Application Gateway HTTP settings in the portal.
-2. Select the setting that has the expired certificate, select **Add Certificate**, and open the new certificate file.
-3. Remove the old certificate by using the **Delete** icon next to the certificate, and then select **Save**.
### Certificate verification failed
application-gateway End To End Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/end-to-end-ssl-portal.md
To configure end-to-end TLS with an application gateway, you need a certificate
For end-to-end TLS encryption, the right backend servers must be allowed in the application gateway. To allow this access, upload the public certificate of the backend servers, also known as Authentication Certificates (v1) or Trusted Root Certificates (v2), to the application gateway. Adding the certificate ensures that the application gateway communicates only with known backend instances. This configuration further secures end-to-end communication. > [!IMPORTANT]
-> If you receive an error message for the backend server certificate, verify that the frontend certificate Common Name (CN) matches the backend certificate CN. For more information, see [Trusted root certificate mismatch](./application-gateway-backend-health-troubleshooting.md#trusted-root-certificate-mismatch)
+> If you receive an error message for the backend server certificate, verify that the frontend certificate Common Name (CN) matches the backend certificate CN. For more information, see [Trusted root certificate mismatch](./application-gateway-backend-health-troubleshooting.md#trusted-root-certificate-mismatch-root-certificate-is-available-on-the-backend-server)
To learn more, see [Overview of TLS termination and end to end TLS with Application Gateway](./ssl-overview.md).
If you choose the latter option, apply the steps in the following procedure.
## Next steps > [!div class="nextstepaction"]
-> [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
+> [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
Previously updated : 04/27/2023 Last updated : 05/25/2023
looks something like this: `/subscriptions/A/resourceGroups/B/providers/Microsof
--scope <App-Gateway-Resource-Group-ID> ```
+>[!Note]
+>If the virtual network that the Application Gateway is deployed into doesn't reside in the same resource group as the AKS nodes, ensure that the identity used by AGIC has the _Network Contributor_ role assigned to the subnet the Application Gateway is deployed into.
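+
+A minimal Azure CLI sketch of that role assignment (the AGIC identity client ID and the subnet resource ID are placeholders):
+
+```azurecli
+az role assignment create \
+    --assignee <agic-identity-client-id> \
+    --role "Network Contributor" \
+    --scope <application-gateway-subnet-resource-id>
+```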
+ ## Using a Service Principal It's also possible to provide AGIC access to ARM via a Kubernetes secret.
application-gateway Ssl Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-overview.md
For the TLS connection to work, you need to ensure that the TLS/SSL certificate
- That the current date and time is within the "Valid from" and "Valid to" date range on the certificate. - That the certificate's "Common Name" (CN) matches the host header in the request. For example, if the client is making a request to `https://www.contoso.com/`, then the CN must be `www.contoso.com`.
-If you have errors with the backend certificate common name (CN), see [Backend certificate invalid common name (CN)](application-gateway-backend-health-troubleshooting.md#backend-certificate-invalid-common-name-cn).
+If you have errors with the backend certificate common name (CN), see our [troubleshooting guide](./application-gateway-backend-health-troubleshooting.md#common-name-cn-doesnt-match).
### Certificates supported for TLS termination
applied-ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-sas-tokens.md
monikerRange: '>=form-recog-2.1.0'
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
- In this article, you'll learn how to create user delegation, shared access signature (SAS) tokens, using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
+ In this article, learn how to create user delegation shared access signature (SAS) tokens using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
At a high level, here's how SAS tokens work:
Azure Blob Storage offers three resource types:
## Prerequisites
-To get started, you'll need:
+To get started, you need:
* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/). * A [Form Recognizer](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Cognitive Services multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
-* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your blob data within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You need to create containers to store and organize your blob data within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
* [Create a storage account](../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field. * [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and blobs) in the **New Container** window.
To get started, you'll need:
:::image type="content" source="media/sas-tokens/container-upload-button.png" alt-text="Screenshot that shows the container Upload button in the Azure portal.":::
-1. The **Upload blob** window will appear. Select your files to upload.
+1. The **Upload blob** window appears. Select your files to upload.
:::image type="content" source="media/sas-tokens/upload-blob-window.png" alt-text="Screenshot that shows the Upload blob window in the Azure portal.":::
The Azure portal is a web-based console that enables you to manage your Azure su
1. Specify the signed key **Start** and **Expiry** times. * When you create a SAS token, the default duration is 48 hours. After 48 hours, you'll need to create a new token.
- * Consider setting a longer duration period for the time you'll be using your storage account for Form Recognizer Service operations.
- * The value for the expiry time is a maximum of seven days from the creation of the SAS token.
+ * Consider setting a longer duration period for the time you're using your storage account for Form Recognizer Service operations.
+ * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
+ * **Account key**: There's no imposed maximum time limit; however, best practice recommends that you configure an expiration policy to limit the interval and minimize the risk of compromise. For more information, see [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
+ * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time of greater than seven days is still only valid for seven days. For more information, see [Use Azure AD credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
-1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, it won't be authorized.
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or range of IP addresses must be public IPs, not private. For more information, see [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS token. The default value is HTTPS.
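
As an alternative to the preceding portal steps, the following Azure CLI sketch creates a user delegation SAS for a container (names are placeholders; `--auth-mode login` with `--as-user` requests a user delegation key):

```azurecli
# Create a user delegation SAS with read and list permissions.
az storage container generate-sas \
    --account-name <storage-account-name> \
    --name <container-name> \
    --permissions rl \
    --expiry 2023-06-01T00:00Z \
    --auth-mode login \
    --as-user
```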
Azure Storage Explorer is a free standalone app that enables you to easily manag
### Get started
-* You'll need the [**Azure Storage Explorer**](../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
+* You need the [**Azure Storage Explorer**](../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
* After the Azure Storage Explorer app is installed, [connect it to the storage account](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Form Recognizer.
Azure Storage Explorer is a free standalone app that enables you to easily manag
* Select **key1** or **key2**. * Review and select **Create**.
-1. A new window will appear with the **Container** name, **SAS URL**, and **Query string** for your container.
+1. A new window appears with the **Container** name, **SAS URL**, and **Query string** for your container.
1. **Copy and paste the SAS URL and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.**
Azure Storage Explorer is a free standalone app that enables you to easily manag
## Use your SAS URL to grant access
-The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the resources may be accessed by the client.
+The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the client accesses the resources.
### REST API
automation Enable Vms Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-vms-monitoring-agent.md
Title: Enable Azure Automation Change Tracking for single machine and multiple m
description: This article tells how to enable the Change Tracking feature for single machine and multiple machines at scale from the Azure portal. Previously updated : 03/16/2023 Last updated : 05/18/2023
This section provides detailed procedure on how you can enable change tracking o
## Enable Change Tracking at scale using Azure Monitoring Agent
+### Prerequisite
+- You must [create the Data collection rule](#create-data-collection-rule).
+
+### Enable Change tracking
+ Using the Deploy if not exist (DINE) policy, you can enable Change tracking with Azure Monitoring Agent at scale and in the most efficient manner. 1. In Azure portal, select **Policy**.
-1. In the **Policy|Definitions** page, in **Authoring**, select **Definitions**
-1. In the **Definition Type** category, select **Initiative** and in **Category**, select **ChangeTracking andInventory**
- You'll see a list of three policies:
- - Enable ChangeTracking and inventory for Virtual Machine Scale Sets
- - Enable ChangeTracking and inventory for virtual machines
- - Enable ChangeTracking and inventory for Arc-enabled virtual machines
-1. Select **Enable ChangeTracking and Inventory for virtual machines** to enable the change tracking on Azure virtual machines.
+1. In the **Policy** page, under **Authoring**, select **Definitions**
+1. In **Policy | Definitions** page, under the **Definition Type** category, select **Initiative** and in **Category**, select **Change Tracking and Inventory**. You'll see a list of three policies:
+
+ #### [Arc-enabled virtual machines](#tab/arcvm)
+
+ - Select *Enable Change Tracking and Inventory for Arc-enabled virtual machines*.
+
+ :::image type="content" source="media/enable-vms-monitoring-agent/enable-for-arc-virtual-machine-manager-inline.png" alt-text="Screenshot showing the selection of Arc-enabled virtual machines." lightbox="media/enable-vms-monitoring-agent/enable-for-arc-virtual-machine-manager-expanded.png":::
+
+ #### [Virtual machines Scale Sets](#tab/vmss)
+
+ - Select *Enable Change Tracking and inventory for Virtual Machine Scale Sets*.
+
+ :::image type="content" source="media/enable-vms-monitoring-agent/enable-for-virtual-machine-scale-set-inline.png" alt-text="Screenshot showing the selection of virtual machines scale sets." lightbox="media/enable-vms-monitoring-agent/enable-for-virtual-machine-scale-set-expanded.png":::
+
+ #### [Virtual machines](#tab/vm)
+
+ - Select *Enable Change Tracking and inventory for virtual machines*.
+
+ :::image type="content" source="media/enable-vms-monitoring-agent/enable-for-vm-inline.png" alt-text="Screenshot showing the selection of virtual machines." lightbox="media/enable-vms-monitoring-agent/enable-for-vm-expanded.png":::
+
+
+1. Select *Enable Change Tracking and Inventory for virtual machines* to enable the change tracking on Azure virtual machines.
This initiative consists of three policies:+ - Assign Built in User-Assigned Managed identity to Virtual machines - Configure ChangeTracking Extension for Windows virtual machines
- - Configure ChangeTracking Extension for Linux virtual machines
-1. Select **Assign** to assign the policy to a resource group. For example, **Assign Built in User-Assigned Managed identity to virtual machines**.
- >[!NOTE]
- >The Resource group contains virtual machines and when you assign the policy, it
- will enable change tracking at scale to a resource group. The virtual machines
- that are on-boarded to the same resource group will automatically have the
- change tracking feature enabled.
-1. In the **Enable ChangeTracking and Inventory for virtual machines** page, enter the following options:
+ - Configure ChangeTracking Extension for Linux virtual machines
+
+ :::image type="content" source="media/enable-vms-monitoring-agent/enable-change-tracking-virtual-machines-inline.png" alt-text="Screenshot showing the selection of three policies." lightbox="media/enable-vms-monitoring-agent/enable-change-tracking-virtual-machines-expanded.png":::
+
+1. Select **Assign** to assign the policy to a resource group. For example, *Assign Built in User-Assigned Managed identity to virtual machines*.
+
+ > [!NOTE]
+ > The resource group contains virtual machines, and when you assign the policy, change tracking is enabled at scale for the resource group. Virtual machines that are onboarded to the same resource group automatically have the change tracking feature enabled.
+
+1. In the **Enable Change Tracking and Inventory for virtual machines** page, enter the following options:
1. In **Basics**, you can define the scope. Select the three dots to configure a scope. In the **Scope** page, provide the **Subscription** and **Resource group**. 1. In **Parameters**, select the option in the **Bring your own user assigned managed identity**. 1. Provide the **Data Collection Rule Resource id**. Learn more on [how to obtain the Data Collection Rule Resource ID after you create the Data collection rule](#create-data-collection-rule).
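
To retrieve the Data Collection Rule Resource ID from the command line, you can use an Azure CLI sketch like the following (placeholder names; the `monitor-control-service` CLI extension may be required):

```azurecli
az monitor data-collection rule show \
    --resource-group <resource-group> \
    --name <data-collection-rule-name> \
    --query id \
    --output tsv
```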
azure-app-configuration Concept Point Time Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-point-time-snapshot.md
Title: Retrieve key-values from a point-in-time
-description: Retrieve old key-value pairs using point-in-time snapshots in Azure App Configuration, which maintains a record of changes to key-values.
+description: Retrieve old key-values using point-in-time revisions in Azure App Configuration, which maintains a record of changes to key-values.
Previously updated : 03/14/2022 Last updated : 05/24/2023
-# Point-in-time snapshot
+# Point-in-time key-values
Azure App Configuration maintains a record of changes made to key-values. This record provides a timeline of key-value changes. You can reconstruct the history of any key and provide its past value at any moment within the key history period (7 days for Free tier stores, or 30 days for Standard tier stores). Using this feature, you can "time-travel" backward and retrieve an old key-value. For example, you can recover configuration settings used before the most recent deployment in order to roll back the application to the previous configuration.
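
For example, the following Azure CLI sketch lists key-values as they existed at a past moment (the store name is a placeholder and the timestamp is illustrative):

```azurecli
az appconfig kv list \
    --name <app-config-store-name> \
    --datetime "2023-05-01T00:00:00Z"
```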
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
az sql instance-failover-group-arc update --k8s-namespace my-namespace --name se
``` Optionally, the `--partner-sync-mode` can be configured back to `sync` mode if desired.
-At this point, if you plan to continue running the production workload off of the secondary site, the `--license-type` needs to be updated to either `BasePrice` or `LicenseIncluded` to initiate billing for the vCores consumed.
+## Post failover operations
+Once you perform a failover from primary site to secondary site, either with or without data loss, you may need to do the following:
+- Update the connection string for your applications to connect to the newly promoted primary Arc SQL managed instance.
+- If you plan to continue running the production workload off of the secondary site, update the `--license-type` to either `BasePrice` or `LicenseIncluded` to initiate billing for the vCores consumed (a CLI sketch follows).
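+
+A sketch of the license type update (this assumes your `arcdata` CLI extension supports `--license-type` on the update command; verify with `az sql mi-arc update --help`):
+
+```azurecli
+# Assumed flag: --license-type on the update command; verify before use.
+az sql mi-arc update \
+    --name <instance-name> \
+    --k8s-namespace <namespace> \
+    --use-k8s \
+    --license-type BasePrice
+```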
+ ## Next steps
-[Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
+[Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Connection string for storage account where the function app code and configurat
||| |WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
-This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
+This setting is required for Consumption plan apps on Windows and for Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
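
If the setting was changed or removed, the following Azure CLI sketch restores it (placeholder values):

```azurecli
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --settings "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=<connection-string>"
```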
azure-functions Functions How To Azure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-azure-devops.md
Title: Continuously update function app code using Azure Pipelines
description: Learn how to set up an Azure DevOps pipeline that targets Azure Functions. Previously updated : 02/25/2022 Last updated : 05/15/2023 ms.devlang: azurecli
+zone_pivot_groups: functions-task-versions
# Continuous delivery with Azure Pipelines Use [Azure Pipelines](/azure/devops/pipelines/) to automatically deploy to Azure Functions. Azure Pipelines lets you build, test, and deploy with continuous integration (CI) and continuous delivery (CD) using [Azure DevOps](/azure/devops/).
-YAML pipelines are defined using a YAML file in your repository. A step is the smallest building block of a pipeline and can be a script or task (pre-packaged script). [Learn about the key concepts and components that make up a pipeline](/azure/devops/pipelines/get-started/key-pipelines-concepts).
+YAML pipelines are defined using a YAML file in your repository. A step is the smallest building block of a pipeline and can be a script or task (prepackaged script). [Learn about the key concepts and components that make up a pipeline](/azure/devops/pipelines/get-started/key-pipelines-concepts).
+
+You'll use the AzureFunctionApp task to deploy to Azure Functions. There are now two versions of the AzureFunctionApp task ([AzureFunctionApp@1](/azure/devops/pipelines/tasks/reference/azure-function-app-v1), [AzureFunctionApp@2](/azure/devops/pipelines/tasks/reference/azure-function-app-v2)). AzureFunctionApp@2 includes enhanced validation support that makes pipelines less likely to fail because of errors.
+
+Choose your task version at the top of the article. YAML pipelines aren't available for Azure DevOps 2019 and earlier.
-YAML pipelines aren't available for Azure DevOps 2019 and earlier.
## Prerequisites * A GitHub account, where you can create a repository. If you don't have one, you can [create one for free](https://github.com).
YAML pipelines aren't available for Azure DevOps 2019 and earlier.
* An ability to run pipelines on Microsoft-hosted agents. You can either purchase a [parallel job](/azure/devops/pipelines/licensing/concurrent-jobs) or you can request a free tier.
-## Create your function app
+* A function app with its code in a GitHub repository. If you don't yet have an Azure Functions code project, you can create one by completing the following language-specific article:
+ # [C\#](#tab/csharp)
+
+ [Quickstart: Create a C# function in Azure using Visual Studio Code](create-first-function-vs-code-csharp.md)
+
+ # [JavaScript](#tab/javascript)
+
+ [Quickstart: Create a JavaScript function in Azure using Visual Studio Code](create-first-function-vs-code-node.md)
+
+ # [Python](#tab/python)
+
+ [Quickstart: Create a function in Azure with Python using Visual Studio Code](create-first-function-vs-code-python.md)
+
+ # [PowerShell](#tab/powershell)
+
+ [Quickstart: Create a PowerShell function in Azure using Visual Studio Code](create-first-function-vs-code-powershell.md)
+
+
++
+## Build your app
+
+# [YAML](#tab/yaml)
+
+1. Sign in to your Azure DevOps organization and navigate to your project.
+1. In your project, navigate to the **Pipelines** page. Then choose the action to create a new pipeline.
+1. Walk through the steps of the wizard by first selecting **GitHub** as the location of your source code.
+1. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
+1. When the list of repositories appears, select your sample app repository.
+1. Azure Pipelines will analyze your repository and recommend a template. Select **Save and run**, then select **Commit directly to the main branch**, and then choose **Save and run** again.
+1. A new run is started. Wait for the run to finish.
+
+# [Classic](#tab/classic)
+
+To get started:
+
+How you build your app in Azure Pipelines depends on your app's programming language. Each language has specific build steps that create a deployment artifact. A deployment artifact is used to update your function app in Azure.
+
+To use built-in build templates, when you create a new build pipeline, select **Use the classic editor** to create a pipeline by using designer templates.
+
+![Screenshot of the Azure Pipelines classic editor.](media/functions-how-to-azure-devops/classic-editor.png)
+
+After you configure the source of your code, search for Azure Functions build templates. Select the template that matches your app language.
+
+![Screenshot of Azure Functions build template.](media/functions-how-to-azure-devops/build-templates.png)
+
+In some cases, build artifacts have a specific folder structure. You might need to select the **Prepend root folder name to archive paths** check box.
+
+![Screenshot of option to prepend the root folder name.](media/functions-how-to-azure-devops/prepend-root-folder.png)
+++
+### Example YAML build pipelines
+
+The following language-specific pipelines can be used for building apps.
+# [C\#](#tab/csharp)
+
+You can use the following sample to create a YAML file to build a .NET app.
+
+If you see errors when building your app, verify that the version of .NET that you use matches your Azure Functions version. For more information, see [Azure Functions runtime versions overview](functions-versions.md).
+
+```yaml
+pool:
+ vmImage: 'windows-latest'
+steps:
+- script: |
+ dotnet restore
+ dotnet build --configuration Release
+- task: DotNetCoreCLI@2
+ inputs:
+ command: publish
+ arguments: '--configuration Release --output publish_output'
+ projects: '*.csproj'
+ publishWebProjects: false
+ modifyOutputPath: false
+ zipAfterPublish: false
+- task: ArchiveFiles@2
+ displayName: "Archive files"
+ inputs:
+ rootFolderOrFile: "$(System.DefaultWorkingDirectory)/publish_output"
+ includeRootFolder: false
+ archiveFile: "$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip"
+- task: PublishBuildArtifacts@1
+ inputs:
+ PathtoPublish: '$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip'
+ artifactName: 'drop'
+```
+
+# [JavaScript](#tab/javascript)
+
+You can use the following sample to create a YAML file to build a JavaScript app:
+
+```yaml
+pool:
+ vmImage: ubuntu-latest # Use 'windows-latest' if you have Windows native Node modules
+steps:
+- bash: |
+ if [ -f extensions.csproj ]
+ then
+ dotnet build extensions.csproj --output ./bin
+ fi
+ npm install
+ npm run build --if-present
+ npm prune --production
+- task: ArchiveFiles@2
+ displayName: "Archive files"
+ inputs:
+ rootFolderOrFile: "$(System.DefaultWorkingDirectory)"
+ includeRootFolder: false
+ archiveFile: "$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip"
+- task: PublishBuildArtifacts@1
+ inputs:
+ PathtoPublish: '$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip'
+ artifactName: 'drop'
+```
-This is a step-by-step guide to using Azure Pipelines with Azure Functions.
-# [.NET Core](#tab/dotnet-core)
+# [Python](#tab/python)
-If you already have an app at GitHub that you want to deploy, you can try creating a pipeline for that code.
+Use one of the following samples to create a YAML file to build an app for a specific Python version. Python is only supported for function apps running on Linux.
-To use sample code instead, fork this GitHub repo:
+**Version 3.7**
+```yaml
+pool:
+ vmImage: ubuntu-latest
+steps:
+- task: UsePythonVersion@0
+ displayName: "Setting Python version to 3.7 as required by functions"
+ inputs:
+ versionSpec: '3.7'
+ architecture: 'x64'
+- bash: |
+ if [ -f extensions.csproj ]
+ then
+ dotnet build extensions.csproj --output ./bin
+ fi
+ pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
+- task: ArchiveFiles@2
+ displayName: "Archive files"
+ inputs:
+ rootFolderOrFile: "$(System.DefaultWorkingDirectory)"
+ includeRootFolder: false
+ archiveFile: "$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip"
+- task: PublishBuildArtifacts@1
+ inputs:
+ PathtoPublish: '$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip'
+ artifactName: 'drop'
```
- https://github.com/microsoft/devops-project-samples/tree/master/dotnet/aspnetcore/functionApp
+
+**Version 3.6**
+
+```yaml
+pool:
+ vmImage: ubuntu-latest
+steps:
+- task: UsePythonVersion@0
+ displayName: "Setting Python version to 3.6 as required by functions"
+ inputs:
+ versionSpec: '3.6'
+ architecture: 'x64'
+- bash: |
+ if [ -f extensions.csproj ]
+ then
+ dotnet build extensions.csproj --output ./bin
+ fi
+ pip install --target="./.python_packages/lib/python3.6/site-packages" -r ./requirements.txt
+- task: ArchiveFiles@2
+ displayName: "Archive files"
+ inputs:
+ rootFolderOrFile: "$(System.DefaultWorkingDirectory)"
+ includeRootFolder: false
+ archiveFile: "$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip"
+- task: PublishBuildArtifacts@1
+ inputs:
+ PathtoPublish: '$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip'
+ artifactName: 'drop'
```
-# [Java](#tab/java)
+# [PowerShell](#tab/powershell)
-If you already have an app at GitHub that you want to deploy, you can try creating a pipeline for that code.
+You can use the following sample to create a YAML file to package a PowerShell app. PowerShell is supported only for Windows Azure Functions.
-To use sample code instead, fork this GitHub repo:
+```yaml
+pool:
+ vmImage: 'windows-latest'
+steps:
+- task: ArchiveFiles@2
+ displayName: "Archive files"
+ inputs:
+ rootFolderOrFile: "$(System.DefaultWorkingDirectory)"
+ includeRootFolder: false
+ archiveFile: "$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip"
+- task: PublishBuildArtifacts@1
+ inputs:
+ PathtoPublish: '$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip'
+ artifactName: 'drop'
```
- https://github.com/MicrosoftDocs/pipelines-java-function
++
+## Deploy your app
+
+You'll deploy with the [Azure Function App Deploy](/azure/devops/pipelines/tasks/deploy/azure-function-app) task. This task requires an [Azure service connection](/azure/devops/pipelines/library/service-endpoints) as an input. An Azure service connection stores the credentials to connect from Azure Pipelines to Azure.
+
+# [YAML](#tab/yaml)
+
+To deploy to Azure Functions, add the following snippet at the end of your `azure-pipelines.yml` file. The default `appType` is Windows. You can specify Linux by setting the `appType` to `functionAppLinux`.
+
+```yaml
+trigger:
+- main
+
+variables:
+ # Azure service connection established during pipeline creation
+ azureSubscription: <Name of your Azure subscription>
+ appName: <Name of the function app>
+ # Agent VM image name
+ vmImageName: 'ubuntu-latest'
+
+- task: AzureFunctionApp@1 # Add this at the end of your file
+ inputs:
+ azureSubscription: <Azure service connection>
+ appType: functionAppLinux # default is functionApp
+ appName: $(appName)
+ package: $(System.ArtifactsDirectory)/**/*.zip
+ #Uncomment the next lines to deploy to a deployment slot
+ #Note that deployment slots are not supported for Linux Dynamic SKU
+ #deployToSlotOrASE: true
+ #resourceGroupName: '<Resource Group Name>'
+ #slotName: '<Slot name>'
```
-# [Nodejs](#tab/nodejs)
+The snippet assumes that the build steps in your YAML file produce the zip archive in the `$(System.ArtifactsDirectory)` folder on your agent.
+
+# [Classic](#tab/classic)
+
+You'll need to create a separate release pipeline to deploy to Azure Functions. When you create a new release pipeline, search for the Azure Functions release template.
+
+![Screenshot of search for the Azure Functions release template.](media/functions-how-to-azure-devops/release-template.png)
+++
+## Deploy a container
-If you already have an app at GitHub that you want to deploy, you can try creating a pipeline for that code.
+You can automatically deploy your code to Azure Functions as a custom container after every successful build. To learn more about containers, see [Create a function on Linux using a custom container](functions-create-function-linux-custom-image.md).
+### Deploy with the Azure Function App for Container task
+
+# [YAML](#tab/yaml/)
+
+The simplest way to deploy to a container is to use the [Azure Function App on Container Deploy task](/azure/devops/pipelines/tasks/deploy/azure-rm-functionapp-containers).
+
+To deploy, add the following snippet at the end of your YAML file:
+
+```yaml
+trigger:
+- main
+
+variables:
+ # Container registry service connection established during pipeline creation
+ dockerRegistryServiceConnection: <Docker registry service connection>
+ imageRepository: <Name of your image repository>
+ containerRegistry: <Name of the Azure container registry>
+ dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
+ tag: '$(Build.BuildId)'
+
+ # Agent VM image name
+ vmImageName: 'ubuntu-latest'
-To use sample code instead, fork this GitHub repo:
+- task: AzureFunctionAppContainer@1 # Add this at the end of your file
+ inputs:
+ azureSubscription: '<Azure service connection>'
+ appName: '<Name of the function app>'
+ imageName: $(containerRegistry)/$(imageRepository):$(tag)
```
- https://github.com/microsoft/devops-project-samples/tree/master/node/plain/functionApp
+
+The snippet assumes that earlier steps in your pipeline build and push the Docker image to your Azure Container Registry. The **Azure Function App on Container Deploy** task pulls the appropriate Docker image corresponding to the `BuildId` from the repository specified, and then deploys the image.
+
+# [Classic](#tab/classic/)
+
+The best way to deploy your function app as a container is to use the [Azure Function App on Container Deploy task](/azure/devops/pipelines/tasks/deploy/azure-rm-functionapp-containers) in your release pipeline.
++
+How you deploy your app depends on your app's programming language. Each language has a template with specific deploy steps. If you can't find a template for your language, select the generic **Azure App Service Deployment** template.
++
+## Deploy to a slot
+
+# [YAML](#tab/yaml)
+
+You can configure your function app to have multiple slots. Slots allow you to safely deploy your app and test it before making it available to your customers.
+
+The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
+
+```yaml
+- task: AzureFunctionApp@1
+ inputs:
+ azureSubscription: <Azure service connection>
+ appType: functionAppLinux
+ appName: <Name of the Function app>
+ package: $(System.ArtifactsDirectory)/**/*.zip
+ deployToSlotOrASE: true
+ resourceGroupName: <Name of the resource group>
+ slotName: staging
+
+- task: AzureAppServiceManage@0
+ inputs:
+ azureSubscription: <Azure service connection>
+ WebAppName: <name of the Function app>
+ ResourceGroupName: <name of resource group>
+ SourceSlot: staging
+ SwapWithProduction: true
```
-***
+# [Classic](#tab/classic)
+
+You can configure your function app to have multiple slots. Slots allow you to safely deploy your app and test it before making it available to your customers.
+
+Use the option **Deploy to Slot** in the **Azure Function App Deploy** task to specify the slot to deploy to. You can swap the slots by using the **Azure App Service Manage** task.
+++
+## Create a pipeline with Azure CLI
+
+To create a build pipeline in Azure, use the `az functionapp devops-pipeline create` [command](/cli/azure/functionapp/devops-pipeline#az-functionapp-devops-pipeline-create). The build pipeline is created to build and release any code changes that are made in your repo. The command generates a new YAML file that defines the build and release pipeline and then commits it to your repo. The prerequisites for this command depend on the location of your code.
+
+- If your code is in GitHub:
+
+ - You must have **write** permissions for your subscription.
+
+ - You must be the project administrator in Azure DevOps.
+
+ - You must have permissions to create a GitHub personal access token (PAT) that has sufficient permissions. For more information, see [GitHub PAT permission requirements](/azure/devops/pipelines/repos/github#repository-permissions-for-personal-access-token-pat-authentication).
+
+ - You must have permissions to commit to the main branch in your GitHub repository so you can commit the autogenerated YAML file.
+
+- If your code is in Azure Repos:
+
+ - You must have **write** permissions for your subscription.
+
+ - You must be the project administrator in Azure DevOps.
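+
+Once the prerequisites are in place, a sketch of the command for a GitHub-hosted repository follows (placeholder values; verify the exact flags with `az functionapp devops-pipeline create --help`):
+
+```azurecli
+az functionapp devops-pipeline create \
+    --functionapp-name <function-app-name> \
+    --organization-name <azure-devops-organization> \
+    --project-name <azure-devops-project> \
+    --github-repository <github-account>/<repository> \
+    --github-pat <personal-access-token>
+```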
+++ ## Build your app
How you build your app in Azure Pipelines depends on your app's programming lang
To use built-in build templates, when you create a new build pipeline, select **Use the classic editor** to create a pipeline by using designer templates.
-![Select the Azure Pipelines classic editor](media/functions-how-to-azure-devops/classic-editor.png)
+![Screenshot of select the Azure Pipelines classic editor.](media/functions-how-to-azure-devops/classic-editor.png)
After you configure the source of your code, search for Azure Functions build templates. Select the template that matches your app language.
-![Select an Azure Functions build template](media/functions-how-to-azure-devops/build-templates.png)
+![Screenshot of select an Azure Functions build template.](media/functions-how-to-azure-devops/build-templates.png)
In some cases, build artifacts have a specific folder structure. You might need to select the **Prepend root folder name to archive paths** check box.
-![The option to prepend the root folder name](media/functions-how-to-azure-devops/prepend-root-folder.png)
+![Screenshot of the option to prepend the root folder name.](media/functions-how-to-azure-devops/prepend-root-folder.png)
steps:
## Deploy your app
-You'll deploy with the [Azure Function App Deploy](/azure/devops/pipelines/tasks/deploy/azure-function-app) task. This task requires an [Azure service connection](/azure/devops/pipelines/library/service-endpoints) as an input. An Azure service connection stores the credentials to connect from Azure Pipelines to Azure.
+You'll deploy with the [Azure Function App Deploy v2](/azure/devops/pipelines/tasks/reference/azure-function-app-v2) task. This task requires an [Azure service connection](/azure/devops/pipelines/library/service-endpoints) as an input. An Azure service connection stores the credentials to connect from Azure Pipelines to Azure.
+
+The v2 version of the task includes support for newer application stacks for .NET, Python, and Node. The task includes networking predeployment checks, and deployment won't proceed when there are issues.
# [YAML](#tab/yaml)
variables:
# Agent VM image name vmImageName: 'ubuntu-latest' -- task: AzureFunctionApp@1 # Add this at the end of your file
+- task: AzureFunctionApp@2 # Add this at the end of your file
inputs: azureSubscription: <Azure service connection> appType: functionAppLinux # default is functionApp appName: $(appName) package: $(System.ArtifactsDirectory)/**/*.zip
+ deploymentMethod: 'auto' # 'auto' | 'zipDeploy' | 'runFromPackage'. Required. Deployment method. Default: auto.
#Uncomment the next lines to deploy to a deployment slot #Note that deployment slots is not supported for Linux Dynamic SKU #deployToSlotOrASE: true
The snippet assumes that the build steps in your YAML file produce the zip archi
You'll need to create a separate release pipeline to deploy to Azure Functions. When you create a new release pipeline, search for the Azure Functions release template.
-![Search for the Azure Functions release template](media/functions-how-to-azure-devops/release-template.png)
+![Screenshot of search for the Azure Functions release template.](media/functions-how-to-azure-devops/release-template.png)
You can configure your function app to have multiple slots. Slots allow you to s
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot: ```yaml-- task: AzureFunctionApp@1
+- task: AzureFunctionApp@2
inputs: azureSubscription: <Azure service connection> appType: functionAppLinux appName: <Name of the Function app> package: $(System.ArtifactsDirectory)/**/*.zip
+ deploymentMethod: 'auto'
deployToSlotOrASE: true resourceGroupName: <Name of the resource group> slotName: staging
You can configure your function app to have multiple slots. Slots allow you to s
Use the option **Deploy to Slot** in the **Azure Function App Deploy** task to specify the slot to deploy to. You can swap the slots by using the **Azure App Service Manage** task. + ## Create a pipeline with Azure CLI To create a build pipeline in Azure, use the `az functionapp devops-pipeline create` [command](/cli/azure/functionapp/devops-pipeline#az-functionapp-devops-pipeline-create). The build pipeline is created to build and release any code changes that are made in your repo. The command generates a new YAML file that defines the build and release pipeline and then commits it to your repo. The prerequisites for this command depend on the location of your code.
To create a build pipeline in Azure, use the `az functionapp devops-pipeline cre
- You must be the project administrator in Azure DevOps. ## Next steps - Review the [Azure Functions overview](functions-overview.md). - Review the [Azure DevOps overview](/azure/devops/pipelines/).+
azure-maps Map Add Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls.md
Title: Add controls to a map | Microsoft Azure Maps description: How to add zoom control, pitch control, rotate control and a style picker to a map in Microsoft Azure Maps.-- Previously updated : 07/29/2019++ Last updated : 05/15/2023
# Add controls to a map
-This article shows you how to add controls to a map. You'll also learn how to create a map with all controls and a [style picker](./choose-map-style.md).
+This article shows you how to add controls to a map. You'll also learn how to create a map with all controls and a [style picker].
## Add zoom control
-A zoom control adds buttons for zooming the map in and out. The following code sample creates an instance of the [ZoomControl](/javascript/api/azure-maps-control/atlas.control.zoomcontrol) class, and adds it the bottom-right corner of the map.
+A zoom control adds buttons for zooming the map in and out. The following code sample creates an instance of the [ZoomControl] class, and adds it to the bottom-right corner of the map.
```javascript //Construct a zoom control and add it to the map.
map.controls.add(new atlas.control.ZoomControl(), {
}); ```
-Below is the complete running code sample of the above functionality.
-
+<!--
<br/>- <iframe height='500' scrolling='no' title='Adding a zoom control' src='//codepen.io/azuremaps/embed/WKOQyN/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WKOQyN/'>Adding a zoom control</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Add pitch control
-A pitch control adds buttons for tilting the pitch to map relative to the horizon. The following code sample creates an instance of the [PitchControl](/javascript/api/azure-maps-control/atlas.control.pitchcontrol) class. It adds the PitchControl to top-right corner of the map.
+A pitch control adds buttons for tilting the pitch of the map relative to the horizon. The following code sample creates an instance of the [PitchControl] class. It adds the PitchControl to the top-right corner of the map.
```javascript //Construct a pitch control and add it to the map.
map.controls.add(new atlas.control.PitchControl(), {
}); ```
-Below is the complete running code sample of the above functionality.
-
+<!--
<br/>- <iframe height='500' scrolling='no' title='Adding a pitch control' src='//codepen.io/azuremaps/embed/xJrwaP/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/xJrwaP/'>Adding a pitch control</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Add compass control
-A compass control adds a button for rotating the map. The following code sample creates an instance of the [CompassControl](/javascript/api/azure-maps-control/atlas.control.compasscontrol) class and adds it the bottom-left corner of the map.
+A compass control adds a button for rotating the map. The following code sample creates an instance of the [CompassControl] class and adds it to the bottom-left corner of the map.
```javascript //Construct a compass control and add it to the map.
map.controls.add(new atlas.control.CompassControl(), {
}); ```
-Below is the complete running code sample of the above functionality.
-
+<!--
<br/>- <iframe height='500' scrolling='no' title='Adding a rotate control' src='//codepen.io/azuremaps/embed/GBEoRb/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/GBEoRb/'>Adding a rotate control</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## A Map with all controls
map.controls.add([
}); ```
-The following code sample adds the zoom, compass, pitch, and style picker controls to the top-right corner of the map. Notice how they automatically stack. The order of the control objects in the script dictates the order in which they appear on the map. To change the order of the controls on the map, you can change their order in the array.
+The following image shows a map with the zoom, compass, pitch, and style picker controls in the top-right corner of the map. Notice how they automatically stack. The order of the control objects in the script dictates the order in which they appear on the map. To change the order of the controls on the map, you can change their order in the array.
-<br/>
+<!--
+<br/>
<iframe height='500' scrolling='no' title='A map with all the controls' src='//codepen.io/azuremaps/embed/qyjbOM/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/qyjbOM/'>A map with all the controls</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
-The style picker control is defined by the [StyleControl](/javascript/api/azure-maps-control/atlas.control.stylecontrol) class. For more information on using the style picker control, see [choose a map style](choose-map-style.md).
+The style picker control is defined by the [StyleControl] class. For more information on using the style picker control, see [choose a map style].
## Customize controls
-Here is a tool to test out the various options for customizing the controls.
+The [Navigation Control Options] sample is a tool to test out the various options for customizing the controls.
-<br/>
+<!--
+<br/>
<iframe height="700" scrolling="no" title="Navigation control options" src="//codepen.io/azuremaps/embed/LwBZMx/?height=700&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/LwBZMx/'>Navigation control options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
If you want to create customized navigation controls, create a class that extends from the `atlas.Control` class or create an HTML element and position it above the map div. Have this UI control call the maps `setCamera` function to move the map.
If you want to create customized navigation controls, create a class that extend
Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [Compass Control](/javascript/api/azure-maps-control/atlas.control.compasscontrol)
+> [CompassControl]
> [!div class="nextstepaction"]
-> [PitchControl](/javascript/api/azure-maps-control/atlas.control.pitchcontrol)
+> [PitchControl]
> [!div class="nextstepaction"]
-> [StyleControl](/javascript/api/azure-maps-control/atlas.control.stylecontrol)
+> [StyleControl]
> [!div class="nextstepaction"]
-> [ZoomControl](/javascript/api/azure-maps-control/atlas.control.zoomcontrol)
+> [ZoomControl]
See the following articles for full code: > [!div class="nextstepaction"]
-> [Add a pin](./map-add-pin.md)
+> [Add a pin]
> [!div class="nextstepaction"]
-> [Add a popup](./map-add-popup.md)
+> [Add a popup]
> [!div class="nextstepaction"]
-> [Add a line layer](map-add-line-layer.md)
+> [Add a line layer]
> [!div class="nextstepaction"]
-> [Add a polygon layer](map-add-shape.md)
+> [Add a polygon layer]
> [!div class="nextstepaction"]
-> [Add a bubble layer](map-add-bubble-layer.md)
+> [Add a bubble layer]
+
+[style picker]: choose-map-style.md
+[ZoomControl]: /javascript/api/azure-maps-control/atlas.control.zoomcontrol
+[PitchControl]: /javascript/api/azure-maps-control/atlas.control.pitchcontrol
+[CompassControl]: /javascript/api/azure-maps-control/atlas.control.compasscontrol
+[StyleControl]: /javascript/api/azure-maps-control/atlas.control.stylecontrol
+[Navigation Control Options]: https://samples.azuremaps.com/?search=Map%20Navigation%20Control%20Options&sample=map-navigation-control-options
+[choose a map style]: choose-map-style.md
+[Add a pin]: map-add-pin.md
+[Add a popup]: map-add-popup.md
+[Add a line layer]: map-add-line-layer.md
+[Add a polygon layer]: map-add-shape.md
+[Add a bubble layer]: map-add-bubble-layer.md
azure-maps Power Bi Visual Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md
Title: Add a bubble layer to an Azure Maps Power BI visual
description: In this article, you'll learn how to use the bubble layer in an Azure Maps Power BI visual. -+ Last updated 11/14/2022
The **Bubble layer** renders location data as scaled circles on the map.
Initially all bubbles have the same fill color. If a field is passed into the **Legend** bucket of the **Fields** pane, the bubbles will be colored based on their categorization. The outline of the bubbles is white by default but can be changed to a new color or by enabling the high-contrast outline option. The **High-contrast outline** option dynamically assigns an outline color that is a high-contrast variant of the fill color. This helps to ensure the bubbles are clearly visible regardless of the style of the map. The following are the primary settings in the **Format** pane that are available in the **Bubble layer** section.
-| Setting | Description |
-|--|-|
-| Size | The size of each bubble. This option is hidden when a field is passed into the **Size** bucket of the **Fields** pane. More options will appear as outlined in the [Bubble size scaling](#bubble-size-scaling) section further down in this article. |
-| Fill color | Color of each bubble. This option is hidden when a field is passed into the **Legend** bucket of the **Fields** pane and a separate **Data colors** section will appear in the **Format** pane. |
-| Fill transparency | Transparency of each bubble. |
-| High-contrast outline | Makes the outline color contrast with the fill color for better accessibility by using a high-contrast variant of the fill color. |
-| Outline color | Color that outlines the bubble. This option is hidden when the **High-contrast outline** option is enabled. |
-| Outline transparency | Transparency of the outline. |
-| Outline width | Width of the outline in pixels. |
-| Blur | Amount of blur applied to the outline. A value of one blurs the bubbles such that only the center point has no transparency. A value of 0 apply any blur effect. |
-| Pitch alignment | Specifies how the bubbles look when the map is pitched. <br/><br/>&nbsp;&nbsp;&nbsp;&nbsp;• Viewport - Bubbles appear on their edge on the map relative to viewport. (default)<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Map - Bubbles are rendered flat on the surface of the map. |
-| Zoom scale | Amount the bubbles should scale relative to the zoom level. A zoom scale of one means no scaling. Large values will make bubbles smaller when zoomed out and larger when zoomed in. This helps to reduce the clutter on the map when zoomed out, yet ensures points stand out more when zoomed in. A value of 1 doesn't apply any scaling. |
-| Min zoom | Minimum zoom level tiles are available. |
-| Max zoom | Maximum zoom level tiles are available. |
-| Layer position | Specifies the position of the layer relative to other map layers. |
+| Setting | Description |
+|-|-|
+| Size | The size of each bubble. This option is hidden when a field is passed into the **Size** bucket of the **Fields** pane. More options will appear as outlined in the [Bubble size scaling](#bubble-size-scaling) section further down in this article. |
+| Shape | Settings for the shape include transparency, which specifies the fill transparency of each bubble. |
+| Color | Fill color of each bubble. This option is hidden when a field is passed into the **Legend** bucket of the **Fields** pane and a separate **Data colors** section will appear in the **Format** pane. |
+| Border | Settings for the border include color, width, transparency and blur.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Color specifies the color that outlines the bubble. This option is hidden when the **High-contrast outline** option is enabled.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Width specifies the width of the outline in pixels.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Transparency specifies the transparency of the outline.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Blur specifies the amount of blur applied to the outline of the bubble. A value of one blurs the bubbles such that only the center point has no transparency. A value of 0 doesn't apply any blur effect.|
+| Zoom | Settings for the zoom property include scale, maximum and minimum.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Zoom scale is the amount the bubbles should scale relative to the zoom level. A zoom scale of one means no scaling. Large values make bubbles smaller when zoomed out and larger when zoomed in. This helps to reduce the clutter on the map when zoomed out, yet ensures points stand out more when zoomed in.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Maximum specifies the maximum zoom level at which tiles are available.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Minimum specifies the minimum zoom level at which tiles are available. |
+| Options | Settings for the options property include pitch alignment and layer position.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Pitch alignment specifies how the bubbles look when the map is pitched.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Layer position specifies the position of the layer relative to other map layers. |
## Bubble size scaling
The **Category labels** settings enable you to customize font setting such as fo
Change how your data is displayed on the map: > [!div class="nextstepaction"]
-> [Add a bar chart layer](power-bi-visual-add-3d-column-layer.md)
+> [Add a 3D column layer](power-bi-visual-add-3d-column-layer.md)
> [!div class="nextstepaction"] > [Add a heat map layer](power-bi-visual-add-heat-map-layer.md)
Customize the visual:
> [Tips and tricks for color formatting in Power BI](/power-bi/visuals/service-tips-and-tricks-for-color-formatting) > [!div class="nextstepaction"]
-> [Customize visualization titles, backgrounds, and legends](/power-bi/visuals/power-bi-visualization-customize-title-background-and-legend)
+> [Customize visualization titles, backgrounds, and legends](/power-bi/visuals/power-bi-visualization-customize-title-background-and-legend)
azure-maps Power Bi Visual Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-get-started.md
Title: Get started with Azure Maps Power BI visual
-description: In this article, you'll learn how to use Azure Maps Power BI visual.
+description: This article discusses how to use Azure Maps Power BI visual.
-+ Last updated 11/29/2021
-# Get started with Azure Maps Power BI visual (Preview)
+# Get started with Azure Maps Power BI visual
**APPLIES TO:** ![Green check mark.](media/power-bi-visual/yes.png) Power BI service for ***consumers*** ![Green check mark.](media/power-bi-visual/yes.png) Power BI service for designers & developers ![Green check mark.](media/power-bi-visual/yes.png) Power BI Desktop ![X indicating no.](media/power-bi-visual/no.png) Requires Pro or Premium license
The Azure Maps Power BI visual connects to cloud service hosted in Azure to retr
- Data in the Location, Latitude, and Longitude buckets may be sent to Azure to retrieve map coordinates (a process called geocoding). - Telemetry data may be collected on the health of the visual (for example, crash reports), if the telemetry option in Power BI is enabled.
-Other than the scenarios described above, no other data overlaid on the map is sent to the Azure Maps servers. All rendering of data happens locally within the client.
+Other than the scenarios previously described, no other data overlaid on the map is sent to the Azure Maps servers. All rendering of data happens locally within the client.
-You, or your administrator, may need to update your firewall to allow access to the Azure Maps platform that uses the following URL.
+> [!TIP]
+> If you're using the Azure Maps [Geographic API endpoints], your firewall may need to be updated to allow access to the Azure Maps platform through any or all of the following URLs:
+>
+> - `https://atlas.microsoft.com`
+> - `https://us.atlas.microsoft.com`
+> - `https://eu.atlas.microsoft.com`
-> `https://atlas.microsoft.com`
-
-To learn more, about privacy and terms of use related to the Azure Maps Power BI visual see [Microsoft Azure Legal Information](https://azure.microsoft.com/support/legal/).
-
-## Azure Maps Power BI visual behavior and requirements
-
-There are a few considerations and requirements for the Azure Maps Power BI visual:
--- The Azure Maps Power BI visual must be enabled in Power BI Desktop. To enable Azure Maps Power BI visual, select **File** &gt; **Options and Settings** &gt; **Options** &gt; **Preview features**, then select the **Azure Maps Visual** checkbox. If the Azure Maps visual isn't available after enabling this setting, it's likely that a tenant admin switch in the Admin Portal needs to be enabled.-- The data set must have fields that contain **latitude** and **longitude** information.
+For more information about privacy and terms of use related to the Azure Maps Power BI visual, see [Microsoft Azure Legal Information].
## Use the Azure Maps Power BI visual
+<!--
+Before you can use the Azure Maps visual in Power BI, you must select the **Use Azure Maps Visual** security option. To do this in Power BI desktop select **File** &gt; **Options and Settings** &gt; **Options** &gt; **Security**, then select the **Use Azure Maps Visual** checkbox.
+-->
Once the Azure Maps Power BI visual is enabled, select the **Azure Maps** icon from the **Visualizations** pane.
-Power BI creates an empty Azure Maps visual design canvas. While in preview, another disclaimer is displayed.
+Power BI creates an empty Azure Maps visual design canvas.
:::image type="content" source="media/power-bi-visual/visual-initial-load.png" alt-text="A screenshot of Power BI desktop with the Azure Maps visual loaded in its initial state." lightbox="media/power-bi-visual/visual-initial-load.png"::: Take the following steps to load the Azure Maps visual:
-1. In the **Fields** pane, drag data fields that contain latitude and longitude coordinate information into the **Latitude** and/or **Longitude** buckets. This is the minimal data needed to load the Azure Maps visual.
+1. Performing one of the following two actions in the **Fields** pane provides the minimal data needed to load the Azure Maps visual:
+ 1. Drag data fields containing latitude and longitude coordinate information into the **Latitude** and/or **Longitude** buckets.
+ 1. Drag data fields containing geospatial data to the **Location** bucket.
:::image type="content" source="media/power-bi-visual/bubble-layer.png" alt-text="A screenshot of the Azure Maps visual displaying points as bubbles on the map after latitude and longitude fields are provided." lightbox="media/power-bi-visual/bubble-layer.png":::
Take the following steps to load the Azure Maps visual:
:::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color.png" alt-text="A screenshot of the Azure Maps visual displaying points as colored bubbles on the map after legend field is provided." lightbox="media/power-bi-visual/bubble-layer-with-legend-color.png":::
+<!--
> [!NOTE] > The built-in legend control for Power BI does not currently appear in this preview.
+-->
3. To scale the data relatively, drag a measure into the **Size** bucket of the **Fields** pane. In this example, we're using the **Sales** column.
- :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png" alt-text="A screenshot of the Azure Maps visual displaying points as colored and scaled bubbles on the map demonstrating the size field." lightbox="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png":::
+ :::image type="content" source="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png" alt-text="A screenshot of the Azure Maps visual displaying points as colored and scaled bubbles on the map that demonstrate the size field." lightbox="media/power-bi-visual/bubble-layer-with-legend-color-and-size.png":::
-4. Use the options in the **Format** pane to customize how data is rendered. The following image is the same map as above, but with the bubble layers fill transparency option set to 50% and the high-contrast outline option enabled.
+4. Use the options in the **Format** pane to customize how data is rendered. The following image is the same map as shown previously, but with the bubble layer's fill transparency option set to 50% and the high-contrast outline option enabled.
:::image type="content" source="media/power-bi-visual/bubble-layer-styled.png" alt-text="A screenshot of the Azure Maps visual displaying points as bubbles on the map with a custom style." lightbox="media/power-bi-visual/bubble-layer-styled.png":::
The following data buckets are available in the **Fields** pane of the Azure Map
| Field | Description | |--|--|
+| Location | Used to enter easily understandable geographical data such as country, state, and city. |
| Latitude | The field used to specify the latitude value of the data points. Latitude values should be between -90 and 90 in decimal degrees format. | | Longitude | The field used to specify the longitude value of the data points. Longitude values should be between -180 and 180 in decimal degrees format. |
-| Legend | The field used to categorize the data and assign a unique color for data points in each category. When this bucket is filled, a **Data colors** section will appear in the **Format** pane that allows adjustments to the colors. |
+| Legend | The field used to categorize the data and assign a unique color for data points in each category. When this bucket is filled, a **Data colors** section appears in the **Format** pane that allows adjustments to the colors. |
| Size | The measure used for relative sizing of data points on the map. | | Tooltips | Other data fields to display in tooltips when shapes are hovered. |
The following data buckets are available in the **Fields** pane of the Azure Map
The **Map settings** section of the **Format** pane provides options for customizing how the map is displayed and reacts to updates.
-The **Map settings** section is divided into three subsections: [style](#style), [view](#view) and [controls](#controls).
+The **Map settings** section is divided into three subsections: [style], [view] and [controls].
### Style
The following settings are available in the **Style** section:
| Setting | Description | |-|--|
-| Style | The style of the map. The dropdown list contains [greyscale light][gs-light], [greyscale dark][gs-dark], [night][night], [road shaded relief][RSR], [satellite][satellite] and [satellite road labels][satellite RL]. |
+| Style | The style of the map. The dropdown list contains [blank and blank accessible], [grayscale dark], [grayscale light], [high contrast dark], [high contrast light], [night], [road], [road shaded relief], [satellite] and [satellite road labels]. |
| Show labels | A toggle switch that enables you to either show or hide map labels. For more information, see list item number five in the previous section. | ### View
The following settings available in the **View** section enable the user to spec
| Setting | Description | |||
-| Auto zoom | Automatically zooms the map into the data loaded through the **Fields** pane of the visual. As the data changes, the map will update its position accordingly. When **Auto zoom** is set to **Off**, the remaining settings in this section become active that enable to user to define the default map view. |
+| Auto zoom | Automatically zooms the map into the data loaded through the **Fields** pane of the visual. As the data changes, the map updates its position accordingly. When **Auto zoom** is set to **Off**, the remaining settings in this section become active, enabling the user to define the default map view. |
| Zoom | The default zoom level of the map. Can be a number between 0 and 22. | | Center latitude | The default latitude of the center of the map. | | Center longitude | The default longitude of the center of the map. |
The following settings are available in the **Controls** section:
|--|--| | World wrap | Allows the user to pan the map horizontally infinitely. | | Style picker | Adds a button to the map that allows the report readers to change the style of the map. |
-| Navigation | Adds buttons to the map as another method to allow the report readers to zoom, rotate, and change the pitch of the map. See this document on [Navigating the map](map-accessibility.md#navigating-the-map) for details on all the different ways users can navigate the map. |
+| Navigation | Adds buttons to the map as another method to allow the report readers to zoom, rotate, and change the pitch of the map. See this document on [Navigating the map] for details on all the different ways users can navigate the map. |
| Selection | Adds a button that allows the user to choose between different modes to select data on the map: circle, rectangle, polygon (lasso), or travel time or distance. To complete drawing a polygon, select the first point, double-click the last point on the map, or press the `c` key. | | Geocoding culture | The default, **Auto**, refers to the Western Address System. The only other option, **JA**, refers to the Japanese address system. In the western address system, you begin with the address details and then proceed to the larger categories such as city, state and postal code. In the Japanese address system, the larger categories are listed first and finish with the address details. |
At this time, Azure Maps is currently available in all countries and regions exc
- South Korea - Azure Government (GCC + GCC High)
-For coverage details for the different Azure Maps services that power this visual, see the [Geographic coverage information](geographic-coverage.md) document.
+For coverage details for the different Azure Maps services that power this visual, see [Geographic coverage information].
**Which web browsers are supported by the Azure Maps Power BI visual?**
-See this documentation for information on [Azure Maps Web SDK supported browsers](supported-browsers.md).
+For a list of supported browsers, see [Azure Maps Web SDK supported browsers].
**How many data points can I visualize?**
This visual supports up to 30,000 data points.
**Can addresses or other location strings be used in this visual?**
-The initial preview of this visual only supports latitude and longitude values in decimal degrees. A future update will add support for addresses and other location strings.
+Yes, addresses and other location strings can be used in the Azure Maps Power BI visual. For more information on addresses and other location strings, see [The location field] in the *Geocoding in Azure Maps Power BI Visual* article.
## Next steps Learn more about the Azure Maps Power BI visual: > [!div class="nextstepaction"]
-> [Understanding layers in the Azure Maps Power BI visual](power-bi-visual-understanding-layers.md)
+> [Understanding layers in the Azure Maps Power BI visual]
> [!div class="nextstepaction"]
-> [Manage the Azure Maps Power BI visual within your organization](power-bi-visual-manage-access.md)
+> [Manage the Azure Maps Power BI visual within your organization]
Customize the visual: > [!div class="nextstepaction"]
-> [Tips and tricks for color formatting in Power BI](/power-bi/visuals/service-tips-and-tricks-for-color-formatting)
+> [Tips and tricks for color formatting in Power BI]
> [!div class="nextstepaction"]
-> [Customize visualization titles, backgrounds, and legends](/power-bi/visuals/power-bi-visualization-customize-title-background-and-legend)
-
-[gs-light]: supported-map-styles.md#grayscale_light
-[gs-dark]: supported-map-styles.md#grayscale_dark
-[night]:supported-map-styles.md#night
-[RSR]: supported-map-styles.md#road_shaded_relief
+> [Customize visualization titles, backgrounds, and legends]
+
+[Geographic API endpoints]: geographic-scope.md#geographic-api-endpoint-mapping
+[Azure Maps Web SDK supported browsers]: supported-browsers.md
+[controls]: #controls
+[Customize visualization titles, backgrounds, and legends]: /power-bi/visuals/power-bi-visualization-customize-title-background-and-legend
+[Geographic coverage information]: geographic-coverage.md
+[style]: #style
+<!-- Styles -->
+[blank and blank accessible]: supported-map-styles.md#blank-and-blank_accessible
+[grayscale dark]: supported-map-styles.md#grayscale_dark
+[grayscale light]: supported-map-styles.md#grayscale_light
+[high contrast dark]: supported-map-styles.md#high_contrast_dark
+[high contrast light]: supported-map-styles.md#high_contrast_light
+[night]: supported-map-styles.md#night
+[road]: supported-map-styles.md#road
+[road shaded relief]: supported-map-styles.md#road_shaded_relief
[satellite]: supported-map-styles.md#satellite
-[satellite RL]: supported-map-styles.md#satellite_road_labels
+[satellite road labels]: supported-map-styles.md#satellite_road_labels
++
+[Manage the Azure Maps Power BI visual within your organization]: power-bi-visual-manage-access.md
+[Microsoft Azure Legal Information]: https://azure.microsoft.com/support/legal/
+[Navigating the map]: map-accessibility.md#navigating-the-map
+[Tips and tricks for color formatting in Power BI]: /power-bi/visuals/service-tips-and-tricks-for-color-formatting
+[Understanding layers in the Azure Maps Power BI visual]: power-bi-visual-understanding-layers.md
+[view]: #view
+[The location field]: power-bi-visual-geocode.md#the-location-field
azure-maps Power Bi Visual Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-manage-access.md
Title: Manage Azure Maps Power BI visual within your organization
description: In this article, you will learn how to manage Azure Maps Power BI visual within your organization. -+ Last updated 11/29/2021
Power BI provides the ability for designers and tenant administrators to manage the use of the Azure Maps visual. Below you will find steps each role can take.
-## Designer options
-
-In Power BI Desktop, designers can disable the Azure Maps visual on the security tab. Select **File** &gt; **Options and settings** and then select **Options** &gt; **Preview features**. When disabled, Azure Maps will not load by default.
-- ## Tenant admin options In PowerBI.com, tenant administrators can turn off the Azure Maps visual for all users. Select **Settings** &gt; **Admin** **Portal** &gt; **Tenant settings**. When disabled, Power BI will no longer display the Azure Maps visual in the visualizations pane.
azure-maps Release Notes Spatial Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-spatial-module.md
+
+ Title: Release notes - Spatial IO Module
+
+description: Release notes for the Azure Maps Spatial IO Module.
++ Last updated : 5/23/2023+++++
+# Spatial IO Module release notes
+
+This document contains information about new features and other changes to the Azure Maps Spatial IO Module.
+
+## [0.1.4]
+
+### Bug fixes (0.1.4)
+
+- Make sure parsed GeoJSON features (from KML) are always assigned valid IDs.
+
+- Unescape XML `&amp;` entities that otherwise break valid URLs.
+
+- Handle empty `<Icon></Icon>` elements inside KMLReader.
+
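+To see these fixes in context, here's a minimal, hedged sketch of parsing KML with the module's `atlas.io.read` function. It assumes the Azure Maps Web SDK and the Spatial IO Module scripts are already loaded on the page, and the KML snippet is illustrative only.
+
+```typescript
+// Provided globally by the Azure Maps Web SDK and Spatial IO Module script tags (assumption).
+declare const atlas: any;
+
+const kml = `<?xml version="1.0" encoding="UTF-8"?>
+<kml xmlns="http://www.opengis.net/kml/2.2">
+  <Placemark>
+    <name>Sample point</name>
+    <Point><coordinates>-122.33,47.6,0</coordinates></Point>
+  </Placemark>
+</kml>`;
+
+// atlas.io.read parses KML (among other formats) into GeoJSON features.
+atlas.io.read(kml).then((dataSet: any) => {
+    // With 0.1.4, every parsed feature should carry a valid ID.
+    dataSet.features.forEach((f: any) => console.log(f.id, f.geometry.type));
+});
+```
+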
+## Next steps
+
+Explore samples showcasing Azure Maps:
+
+> [!div class="nextstepaction"]
+> [Azure Maps Spatial IO Samples]
+
+Stay up to date on Azure Maps:
+
+> [!div class="nextstepaction"]
+> [Azure Maps Blog]
+
+[0.1.4]: https://www.npmjs.com/package/azure-maps-spatial-io/v/0.1.4
+[Azure Maps Spatial IO Samples]: https://samples.azuremaps.com/?search=Spatial%20IO%20Module
+[Azure Maps Blog]: https://techcommunity.microsoft.com/t5/azure-maps-blog/bg-p/AzureMapsBlog
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Resource type |Dimensions supported |Multi-resource alerts| Metrics available| |||--|-|
-|Microsoft.Aadiam/azureADMetrics | Yes | No | Azure Active Directory (metrics in private preview) |
|Microsoft.ApiManagement/service | Yes | No | [Azure API Management](../essentials/metrics-supported.md#microsoftapimanagementservice) | |Microsoft.App/containerApps | Yes | No | Azure Container Apps | |Microsoft.AppConfiguration/configurationStores |Yes | No | [Azure App Configuration](../essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) |
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/we
## Help and support
+### Azure technical support
+
+For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
+ ### Microsoft Q&A questions forum Post general questions to the Microsoft Q&A [answers forum](/answers/topics/24223/azure-monitor.html).
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
These plugins provide extra functionality and integration with the specific fram
The React plug-in for the Application Insights JavaScript SDK enables: -- Tracking of route changes.-- React components usage statistics.
+- Tracking of router changes
+- React components usage statistics
### Get started
Install the npm package:
```bash
-npm install @microsoft/applicationinsights-react-js @microsoft/applicationinsights-web --save
+npm install @microsoft/applicationinsights-react-js
```
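
If it helps to see the pieces together, here's a minimal, hedged setup sketch based on the plugin's documented pattern. The connection string is a placeholder, and the use of `createBrowserHistory` from the `history` package for router-change tracking is an assumption about your router setup.

```typescript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
import { createBrowserHistory } from 'history';

const browserHistory = createBrowserHistory();
const reactPlugin = new ReactPlugin();

const appInsights = new ApplicationInsights({
    config: {
        connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', // placeholder
        extensions: [reactPlugin],
        extensionConfig: {
            // Passing a history object is what enables router-change tracking.
            [reactPlugin.identifier]: { history: browserHistory }
        }
    }
});
appInsights.loadAppInsights();
```

Component usage statistics can then be collected by wrapping a component with `withAITracking(reactPlugin, MyComponent)`.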
Check out the [Application Insights React demo](https://github.com/microsoft/app
### React Native plugin for Application Insights JavaScript SDK
-The React Native plugin for Application Insights JavaScript SDK collects device information, by default this plugin automatically collects:
+The React Native plugin for Application Insights JavaScript SDK collects device information. By default, this plugin automatically collects:
- **Unique Device ID** (Also known as Installation ID.)-- **Device Model Name** (Such as iPhone X, Samsung Galaxy Fold, Huawei P30 Pro etc.)
+- **Device Model Name** (Such as iPhone XS, Samsung Galaxy Fold, Huawei P30 Pro etc.)
- **Device Type** (For example, handset, tablet, etc.) ### Requirements
-You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/), therefore it doesn't work with Create React Native App.
+You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin only works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/) or Create React Native App, which is based on the Expo framework.
### Getting started
-Install and link the [react-native-device-info](https://www.npmjs.com/package/react-native-device-info) package. Keep the `react-native-device-info` package up to date to collect the latest device names using your app.
+By default, this plugin relies on the [`react-native-device-info` package](https://www.npmjs.com/package/react-native-device-info). You must install and link to this package. Keep the `react-native-device-info` package up to date to collect the latest device names using your app.
+
+Since v3, support for accessing the DeviceInfo has been abstracted into an interface `IDeviceInfoModule` to enable you to use / set your own device info module. This interface uses the same function names and results as `react-native-device-info`.
```zsh
appInsights.loadAppInsights();
```
+#### Disabling automatic device info collection
+
+```typescript
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+// The React Native plugin ships in its own package.
+import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
+
+var RNPlugin = new ReactNativePlugin();
+var appInsights = new ApplicationInsights({
+ config: {
+ instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ disableDeviceCollection: true,
+ extensions: [RNPlugin]
+ }
+});
+appInsights.loadAppInsights();
+```
+
+#### Using your own device info collection class
+
+```typescript
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+// The React Native plugin ships in its own package.
+import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
+
+// Simple inline constant implementation
+const myDeviceInfoModule = {
+ getModel: () => "deviceModel",
+ getDeviceType: () => "deviceType",
+ // v5 returns a string while latest returns a promise
+ getUniqueId: () => "deviceId", // This "may" also return a Promise<string>
+};
+
+var RNPlugin = new ReactNativePlugin();
+RNPlugin.setDeviceInfoModule(myDeviceInfoModule);
+
+var appInsights = new ApplicationInsights({
+ config: {
+ instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ extensions: [RNPlugin]
+ }
+});
+
+appInsights.loadAppInsights();
+```
+
+### IDeviceInfoModule
+
+Interface to abstract how the plugin can access the Device Info. This interface is a stripped-down version of the `react-native-device-info` interface and is mostly supplied for testing.
+
+```typescript
+export interface IDeviceInfoModule {
+ /**
+ * Returns the Device Model
+ */
+ getModel: () => string;
+
+ /**
+ * Returns the device type
+ */
+ getDeviceType: () => string;
+
+ /**
+ * Returns the unique Id for the device. To support both the current version and previous
+ * versions react-native-device-info, this may return either a `string` or `Promise<string>`.
+ * When a promise is returned, the plugin will "wait" for the promise to `resolve` or `reject`
+ * before processing any events. This WILL cause telemetry to be BLOCKED until either of these
+ * states, so when returning a Promise, it MUST `resolve` or `reject`. It can't just never resolve.
+ * There is a default timeout configured via `uniqueIdPromiseTimeout` to automatically unblock
+ * event processing when this issue occurs.
+ */
+ getUniqueId: () => Promise<string> | string;
+}
+```
+
+If events are getting "blocked" because the `Promise` returned via `getUniqueId` is never resolved / rejected, you can call `setDeviceId()` on the plugin to "unblock" this waiting state. There is also an automatic timeout configured via `uniqueIdPromiseTimeout` (defaults to 5 seconds), which will internally call `setDeviceId()` with any previously configured value.
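+
+The following sketch shows both unblock options in one place. The placement of `uniqueIdPromiseTimeout` in the root config is an assumption that mirrors `disableDeviceCollection` above, and the timeout and fallback ID values are illustrative.
+
+```typescript
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
+
+var RNPlugin = new ReactNativePlugin();
+var appInsights = new ApplicationInsights({
+    config: {
+        instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+        // Assumed placement: shorten how long event processing waits for an
+        // unresolved getUniqueId() promise, in milliseconds.
+        uniqueIdPromiseTimeout: 2000,
+        extensions: [RNPlugin]
+    }
+});
+appInsights.loadAppInsights();
+
+// If the promise still never settles, unblock event processing manually
+// with a fallback device ID (value is illustrative).
+RNPlugin.setDeviceId('fallback-device-id');
+```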
+ ### Enable Correlation Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
The Angular plugin for the Application Insights JavaScript SDK, enables:
Install npm package: ```bash
-npm install @microsoft/applicationinsights-angularplugin-js @microsoft/applicationinsights-web --save
+npm install @microsoft/applicationinsights-angularplugin-js
``` ### Basic usage Set up an instance of Application Insights in the entry component in your app: + [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+> [!IMPORTANT]
+> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. You MUST include either the `@microsoft/applicationinsights-web` package or the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent.
+ ```js import { Component } from '@angular/core'; import { ApplicationInsights } from '@microsoft/applicationinsights-web';
export class AppComponent {
} ```
-To track uncaught exceptions, setup ApplicationinsightsAngularpluginErrorService in `app.module.ts`:
+To track uncaught exceptions, set up ApplicationinsightsAngularpluginErrorService in `app.module.ts`:
+
+> [!IMPORTANT]
+> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. You MUST include either the `@microsoft/applicationinsights-web` package or the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent.
```js import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applicationinsights-angularplugin-js';
import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applica
export class AppModule { } ```
+To chain more custom error handlers, create custom error handlers that implement IErrorService:
+
+```javascript
+import { IErrorService } from '@microsoft/applicationinsights-angularplugin-js';
+
+export class CustomErrorHandler implements IErrorService {
+ handleError(error: any) {
+ ...
+ }
+}
+```
+
+And pass the `errorServices` array through `extensionConfig`:
+
+```javascript
+extensionConfig: {
+ [angularPlugin.identifier]: {
+ router: this.router,
+ errorServices: [new CustomErrorHandler()]
+ }
+ }
+```
+ ### Enable Correlation Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 me
In this example, we utilize the `ApplicationInsightsSampler`, which offers compatibility with Application Insights SDKs.
-```dotnetcli
-dotnet add package --prerelease OpenTelemetry.Extensions.AzureMonitor
-```
+1. Install the latest [OpenTelemetry.Extensions.AzureMonitor](https://www.nuget.org/packages/OpenTelemetry.Extensions.AzureMonitor) package:
+ ```dotnetcli
+ dotnet add package --prerelease OpenTelemetry.Extensions.AzureMonitor
+ ```
-```csharp
-var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .SetSampler(new ApplicationInsightsSampler(new ApplicationInsightsSamplerOptions { SamplingRatio = 1.0F }))
- .AddAzureMonitorTraceExporter();
-```
+1. Add the following code snippet.
+ ```csharp
+ var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .SetSampler(new ApplicationInsightsSampler(new ApplicationInsightsSamplerOptions { SamplingRatio = 0.1F }))
+ .AddAzureMonitorTraceExporter();
+ ```
#### [Java](#tab/java)
We support the credential classes provided by [Azure Identity](https://github.co
- We recommend `ClientSecretCredential` for service principals. - Provide the tenant ID, client ID, and client secret to the constructor.
-```csharp
-var builder = WebApplication.CreateBuilder(args);
+1. Install the latest [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) package:
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ ```
+
+1. Provide the desired credential class:
+ ```csharp
+ var builder = WebApplication.CreateBuilder(args);
-builder.Services.AddOpenTelemetry().UseAzureMonitor(options => {
- options.Credential = new DefaultAzureCredential();
-});
+ builder.Services.AddOpenTelemetry().UseAzureMonitor(options => {
+ options.Credential = new DefaultAzureCredential();
+ });
-var app = builder.Build();
+ var app = builder.Build();
-app.Run();
-```
+ app.Run();
+ ```
#### [.NET](#tab/net)
We support the credential classes provided by [Azure Identity](https://github.co
- We recommend `ClientSecretCredential` for service principals. - Provide the tenant ID, client ID, and client secret to the constructor.
-```csharp
-var credential = new DefaultAzureCredential();
+1. Install the latest [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) package:
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ ```
-var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddAzureMonitorTraceExporter(options =>
- {
- options.Credential = credential;
- });
+1. Provide the desired credential class:
+ ```csharp
+ var credential = new DefaultAzureCredential();
-var metricsProvider = Sdk.CreateMeterProviderBuilder()
- .AddAzureMonitorMetricExporter(options =>
- {
- options.Credential = credential;
- });
+ var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddAzureMonitorTraceExporter(options =>
+ {
+ options.Credential = credential;
+ });
-var loggerFactory = LoggerFactory.Create(builder =>
-{
- builder.AddOpenTelemetry(options =>
- {
- options.AddAzureMonitorLogExporter(options =>
+ var metricsProvider = Sdk.CreateMeterProviderBuilder()
+ .AddAzureMonitorMetricExporter(options =>
{ options.Credential = credential; });+
+ var loggerFactory = LoggerFactory.Create(builder =>
+ {
+ builder.AddOpenTelemetry(options =>
+ {
+ options.AddAzureMonitorLogExporter(options =>
+ {
+ options.Credential = credential;
+ });
+ });
});
-});
-```
+ ```
#### [Java](#tab/java)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
The [OpenTelemetry Specification](https://github.com/open-telemetry/opentelemetr
describes the instruments and provides examples of when you might use each one. > [!TIP]
-> The histogram is the most versatile and most closely equivalent to the Application Insights Track Metric [Classic API](api-custom-events-metrics.md). Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
+> The histogram is the most versatile and most closely equivalent to the Application Insights GetMetric [Classic API](api-custom-events-metrics.md). Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
#### Histogram Example
public class Program
{ using var meterProvider = Sdk.CreateMeterProviderBuilder() .AddMeter("OTel.AzureMonitor.Demo")
- .AddAzureMonitorMetricExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>";
- })
+ .AddAzureMonitorMetricExporter()
.Build(); Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");
public class Program
{ using var meterProvider = Sdk.CreateMeterProviderBuilder() .AddMeter("OTel.AzureMonitor.Demo")
- .AddAzureMonitorMetricExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>";
- })
+ .AddAzureMonitorMetricExporter()
.Build(); Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter");
public class Program
{ using var meterProvider = Sdk.CreateMeterProviderBuilder() .AddMeter("OTel.AzureMonitor.Demo")
- .AddAzureMonitorMetricExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>";
- })
+ .AddAzureMonitorMetricExporter()
.Build(); var process = Process.GetCurrentProcess();
to draw attention in relevant experiences including the failures section and end
#### [ASP.NET Core](#tab/aspnetcore)
-```csharp
-using (var activity = activitySource.StartActivity("ExceptionExample"))
-{
- try
- {
- throw new Exception("Test exception");
- }
- catch (Exception ex)
- {
- activity?.SetStatus(ActivityStatusCode.Error);
- activity?.RecordException(ex);
- }
-}
-```
+- To log an Exception using an Activity:
+ ```csharp
+ using (var activity = activitySource.StartActivity("ExceptionExample"))
+ {
+ try
+ {
+ throw new Exception("Test exception");
+ }
+ catch (Exception ex)
+ {
+ activity?.SetStatus(ActivityStatusCode.Error);
+ activity?.RecordException(ex);
+ }
+ }
+ ```
+- To log an Exception using ILogger:
+ ```csharp
+ var logger = loggerFactory.CreateLogger(logCategoryName);
+
+ try
+ {
+ throw new Exception("Test Exception");
+ }
+ catch (Exception ex)
+ {
+ logger.Log(
+ logLevel: LogLevel.Error,
+ eventId: 0,
+ exception: ex,
+ message: "Hello {name}.",
+ args: new object[] { "World" });
+ }
+ ```
#### [.NET](#tab/net)
-```csharp
-using (var activity = activitySource.StartActivity("ExceptionExample"))
-{
- try
- {
- throw new Exception("Test exception");
- }
- catch (Exception ex)
- {
- activity?.SetStatus(ActivityStatusCode.Error);
- activity?.RecordException(ex);
- }
-}
-```
+- To log an Exception using an Activity:
+ ```csharp
+ using (var activity = activitySource.StartActivity("ExceptionExample"))
+ {
+ try
+ {
+ throw new Exception("Test exception");
+ }
+ catch (Exception ex)
+ {
+ activity?.SetStatus(ActivityStatusCode.Error);
+ activity?.RecordException(ex);
+ }
+ }
+ ```
+- To log an Exception using ILogger:
+ ```csharp
+ var logger = loggerFactory.CreateLogger("ExceptionExample");
+
+ try
+ {
+ throw new Exception("Test Exception");
+ }
+ catch (Exception ex)
+ {
+ logger.Log(
+ logLevel: LogLevel.Error,
+ eventId: 0,
+ exception: ex,
+ message: "Hello {name}.",
+ args: new object[] { "World" });
+ }
+ ```
#### [Java](#tab/java)
For code representing a background job not captured by an instrumentation librar
```csharp using var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddSource("ActivitySourceName")
- .AddAzureMonitorTraceExporter(o => o.ConnectionString = "<Your Connection String>")
+ .AddAzureMonitorTraceExporter()
.Build(); var activitySource = new ActivitySource("ActivitySourceName");
To add span attributes, use either of the following two ways:
using var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddSource("OTel.AzureMonitor.Demo") .AddProcessor(new ActivityEnrichingProcessor())
- .AddAzureMonitorTraceExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>"
- })
+ .AddAzureMonitorTraceExporter()
.Build(); ```
You might use the following ways to filter out telemetry before it leaves your a
using var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddSource("OTel.AzureMonitor.Demo") .AddProcessor(new ActivityFilteringProcessor())
- .AddAzureMonitorTraceExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>"
- })
+ .AddAzureMonitorTraceExporter()
.Build(); ```
Get the request trace ID and the span ID in your code:
### [ASP.NET Core](#tab/aspnetcore)
+- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly. - For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22). #### [.NET](#tab/net)
+- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly. - For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22). ### [Java](#tab/java) -- For help with troubleshooting, review the [troubleshooting steps](java-standalone-troubleshoot.md). - For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
+- For help with troubleshooting, review the [troubleshooting steps](java-standalone-troubleshoot.md).
- For OpenTelemetry issues, contact the [OpenTelemetry community](https://opentelemetry.io/community/) directly. - For a list of open issues related to Azure Monitor Java Autoinstrumentation, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-Java/issues). ### [Node.js](#tab/nodejs)
+- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
- For OpenTelemetry issues, contact the [OpenTelemetry JavaScript community](https://github.com/open-telemetry/opentelemetry-js) directly. - For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-js/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22). ### [Python](#tab/python)
+- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
- For OpenTelemetry issues, contact the [OpenTelemetry Python community](https://github.com/open-telemetry/opentelemetry-python) directly. - For a list of open issues related to Azure Monitor Distro, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-Python/issues/new).
azure-monitor Azure Monitor Workspace Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-private-endpoint.md
Last updated 05/03/2023
Use [private endpoints](../../private-link/private-endpoint-overview.md) for Managed Prometheus and your Azure Monitor workspace to allow clients on a virtual network (VNet) to securely query data over a [Private Link](../../private-link/private-link-overview.md). The private endpoint uses a separate IP address within the VNet address space of your Azure Monitor workspace resource. Network traffic between the clients on the VNet and the workspace resource traverses the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet. > [!NOTE]
-> Configuration of [Private Link for ingestion of data into Managed Prometheus and your Azure Monitor workspace](private-link-data-ingestion.md) is done on the Data Collection Endpoints associated with your workspace.
+> If you're using Azure Managed Grafana to query your data, configure a [Managed Private Endpoint](https://aka.ms/ags/mpe) to ensure that queries from Managed Grafana into your Azure Monitor workspace use the Microsoft backbone network without going through the internet.
Using private endpoints for your workspace enables you to:
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The latest values for Microsoft Purview quotas can be found in the [Microsoft Pu
[!INCLUDE [sentinel-service-limits](../../sentinel/includes/sentinel-limits-threat-intelligence.md)]
+## TI upload indicators API limits
++ ### User and Entity Behavior Analytics (UEBA) limits [!INCLUDE [sentinel-service-limits](../../sentinel/includes/sentinel-limits-ueba.md)]
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> | sharedvmextensions | No | No | No | > | sharedvmimages | No | No | No | > | sharedvmimages / versions | No | No | No |
-> | snapshots | **Yes** - Full <br> No - Incremental | **Yes** - Full <br> No - Incremental | No - Full <br> No - Incremental |
+> | snapshots | **Yes** - Full <br> No - Incremental | **Yes** - Full <br> No - Incremental | No - Full <br> **Yes** - Incremental |
> | sshpublickeys | No | No | No | > | virtualmachines | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs. | > | virtualmachines / extensions | **Yes** | **Yes** | No |
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 05/24/2023 Last updated : 05/25/2023
Archive tier supports the following clients:
### Supported regions
-| Workloads | Generally available |
-| | | |
-| SQL Server in Azure Virtual Machines/ SAP HANA in Azure Virtual Machines | All regions, except West US 3, West India, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West. |
-| Azure Virtual Machines | All regions, except West US 3, West India, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West, UAE North. |
+| Supported workload | Supported region |
+| | |
+| **Azure VMs**, **SQL Server in Azure VMs**, **SAP HANA in Azure VMs** | Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East Asia, East US 2, East US, France Central, Germany West Central, Central India, South India, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, Norway East, South Central US, South East Asia, UAE North, UK South, UK West, West Central US, West Europe, West US 2, West US, US Gov Arizona, US Gov Virginia, US Gov Texas, China North 2, China East 2 |
## How Azure Backup moves recovery points to the Vault-archive tier?
backup Backup Azure Dataprotection Use Rest Api Restore Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dataprotection-use-rest-api-restore-disks.md
Title: Restore Azure Disks using Azure Data Protection REST API description: In this article, learn how to restore Azure Disks using Azure Data protection REST API.- Previously updated : 10/06/2021+ Last updated : 05/25/2023 ms.assetid: 30f4e7ff-2a55-4a85-be44-55feedc24607
This article describes how to restore [disks](disk-backup-overview.md) using Azure Backup.
+Azure Disk Backup offers a turnkey solution that provides snapshot lifecycle management for managed disks by automating periodic creation of snapshots and retaining it for configured duration using backup policy. You can manage the disk snapshots with zero infrastructure cost and without the need for custom scripting or any management overhead. This is a crash-consistent backup solution that takes point-in-time backup of a managed disk using incremental snapshots with support for multiple backups per day. It's also an agent-less solution and doesn't impact production application performance. It supports backup and restore of both OS and data disks (including shared disks), whether or not they're currently attached to a running Azure virtual machine.
+ >[!Note] >- Currently, the Original-Location Recovery (OLR) option to restore by replacing the existing source disk (from where the backups were taken) isn't supported. >- You can restore from a recovery point to create a new disk in the same resource group of the source disk or in any other resource group. It's known as Alternate-Location Recovery (ALR).
-In this article, you'll learn how to:
--- Restore to create a new disk--- Track the restore operation status- ## Prerequisites - [Create a Backup vault](backup-azure-dataprotection-use-rest-api-create-update-backup-vault.md)
Once you submit the *GET* request, this returns response as 200 (OK) and the lis
|200 OK | [AzureBackupRecoveryPointResourceList](/rest/api/dataprotection/recovery-points/list#azurebackuprecoverypointresourcelist) | OK | |Other Status codes | [CloudError](/rest/api/dataprotection/recovery-points/list#clouderror) | Error response describes the reason for the operation failure. |
-##### Example response for list of recovery points
+**Example response for list of recovery points**
```http HTTP/1.1 200 OK
X-Powered-By: ASP.NET
} ```
-Select the relevant recovery points from the above list and proceed to prepare the restore request. We'll choose a recovery point named _a3d02fc3ab8a4c3a8cc26688c26d3356_ from the above list to restore.
+Select the relevant recovery points from the above list and proceed to prepare the restore request. We'll choose a recovery point named `a3d02fc3ab8a4c3a8cc26688c26d3356` from the above list to restore.
### Prepare the restore request Construct the Azure Resource Manager (ARM) ID of the new disk to be created with the target resource group (to which permissions were assigned as detailed [above](#set-up-permissions)) and the required disk name.
-For example, we'll use a disk named _APITestDisk2_, under a resource group _targetrg_, present in the same region as the backed-up disk, but under a different subscription.
+For example, we'll use a disk named `APITestDisk2`, under a resource group `targetrg`, present in the same region as the backed-up disk, but under a different subscription.
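
With those values, the constructed ARM ID for the target disk takes the following form (the target subscription ID is a placeholder):

```http
/subscriptions/<target-subscription-id>/resourceGroups/targetrg/providers/Microsoft.Compute/disks/APITestDisk2
```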
#### Construct the request body for restore request
We have constructed a section of the same in the [above section](#construct-the-
The _validate restore request_ is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that needs to be tracked separately.
-It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation completes.
|Name |Type |Description | |||| |200 OK | | Status of validate request | |202 Accepted | | Accepted |
-###### Example response to restore validate request
+**Example response to restore validate request**
-Once the *POST* operation is submitted, it'll return the initial response as 202 (Accepted) with an _Azure-asyncOperation_ header.
+Once the *POST* operation is submitted, it will return the initial response as 202 (Accepted) with an _Azure-asyncOperation_ header.
```http HTTP/1.1 202 Accepted
Location: https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxx
X-Powered-By: ASP.NET ```
-Track the _Azure-AsyncOperation_ header with a simple *GET* request. When the request is successful, it returns 200 (OK) with a success status response.
+Track the _Azure-AsyncOperation_ header with a *GET* request. When the request is successful, it returns 200 (OK) with a success status response.
```http GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/providers/Microsoft.DataProtection/locations/westus/operationStatus/ZmMzNDFmYWMtZWJlMS00NGJhLWE4YTgtMDNjYjI4Y2M5OTExOzVlNzMxZDBiLTQ3MDQtNDkzNS1hYmNjLWY4YWEzY2UzNTk1ZQ==?api-version=2021-01-01
The only change from the validate restore request body is to remove the _restore
The _trigger restore request_ is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that needs to be tracked separately.
-It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation completes.
|Name |Type |Description | |||| |200 OK | | Status of restore request | |202 Accepted | | Accepted |
-##### Example response to trigger restore request
+**Example response to trigger restore request**
-Once the *POST* operation is submitted, it'll return the initial response as 202 (Accepted) with an _Azure-asyncOperation_ header.
+Once the *POST* operation is submitted, it will return the initial response as 202 (Accepted) with an _Azure-asyncOperation_ header.
```http HTTP/1.1 202 Accepted
Location: https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxx
X-Powered-By: ASP.NET ```
-Track the _Azure-AsyncOperation_ header with a simple *GET* request. When the request is successful, it'll return 200 (OK) with a job ID that should be further tracked for completion of restore request.
+Track the _Azure-AsyncOperation_ header with a *GET* request. When the request is successful, it will return 200 (OK) with a job ID that should be further tracked for completion of restore request.
```http GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/providers/Microsoft.DataProtection/locations/westus/operationStatus/ZmMzNDFmYWMtZWJlMS00NGJhLWE4YTgtMDNjYjI4Y2M5OTExO2Q1NDIzY2VjLTczYjYtNDY5ZC1hYmRjLTc1N2Q0ZTJmOGM5OQ==?api-version=2021-01-01
GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx
The _trigger restore requests_ triggered the restore job. To track the resultant Job ID, use the [GET Jobs API](/rest/api/dataprotection/jobs/get).
-Use the simple GET command to track the _JobId_ present in the [trigger restore response](#example-response-to-trigger-restore-request) above.
+Use the *GET* command to track the _JobId_ present in the [trigger restore response](#response-to-trigger-restore-requests) above.
```http GET /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/TestBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/testBkpVault/backupJobs/c4bd49a1-0645-4eec-b207-feb818962852?api-version=2021-01-01
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 05/25/2023
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md
# Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure Bastion so that I can securely connect to my Azure virtual machines. Previously updated : 05/04/2023 Last updated : 05/18/2023
Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. ## <a name="key"></a>Key benefits
RDP and SSH are some of the fundamental means through which you can connect to y
Currently, by default, new Bastion deployments don't support zone redundancy. Previously deployed bastions may or may not be zone redundant. The exceptions are Bastion deployments in Korea Central and Southeast Asia, which do support zone redundancy. This figure shows the architecture of an Azure Bastion deployment. In this diagram:
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
description: Learn how to connect to a VM from a Windows computer by using Basti
Previously updated : 12/05/2022 Last updated : 05/18/2023
This article helps you configure your Bastion deployment, and then connect to a VM in the VNet using the native client (SSH or RDP) on your local computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). Additionally with this feature, you can now also upload or download files, depending on the connection type and client. + Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client. Controlling access to features such as file transfer via Bastion isn't supported. > [!NOTE]
After you deploy this feature, there are two different sets of connection instru
* Use native clients on *non*-Windows local computers (example: a Linux PC). * Use the native client of your choice. (This includes the Windows native client.)
- * Connect using SSH or RDP. (Note that bastion tunnel does not relay web servers or hosts.)
+ * Connect using SSH or RDP. (The bastion tunnel doesn't relay web servers or hosts.)
* Set up concurrent VM sessions with Bastion. * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
Use the example that corresponds to the type of target VM to which you want to c
az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" ```
-> [!IMPORTANT]
-> Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM.
+ > [!IMPORTANT]
+ > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM.
**SSH:**
Use the example that corresponds to the type of target VM to which you want to c
az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>" ```
-1. Once you sign in to your target VM, the native client on your computer will open up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+ Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
### <a name="connect-linux"></a>Connect to a Linux VM
Use the example that corresponds to the type of target VM to which you want to c
az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "password" --username "<Username>" ```
-1. Once you sign in to your target VM, the native client on your computer will open up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+ 1. Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
## <a name="connect-tunnel"></a>Connect to VM - other native clients
-This section helps you connect to your virtual machine from native clients on *non*-Windows local computers (example: a Linux PC) using the **az network bastion tunnel** command. You can also connect using this method from a Windows computer. This is helpful when you require an SSH connection and want to upload files to your VM. Note that bastion tunnel supports RDP/SSH connection but does not relay web servers or hosts.
+This section helps you connect to your virtual machine from native clients on *non*-Windows local computers (example: a Linux PC) using the **az network bastion tunnel** command. You can also connect using this method from a Windows computer. This is helpful when you require an SSH connection and want to upload files to your VM. The bastion tunnel supports RDP/SSH connection, but doesn't relay web servers or hosts.
This connection supports file upload from the local computer to the target VM. For more information, see [Upload files](vm-upload-download-native.md).
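
For orientation, a tunnel is opened with a command of the following general shape; the resource port and local port values here are illustrative. Once the tunnel is open, point your native client at `127.0.0.1` on the chosen local port.

```azurecli
az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --resource-port 22 --port 50022
```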
cognitive-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image-40.md
To use a custom model, don't use the features query parameter. Instead, set the
You can specify the language of the returned data. The language is optional, with the default being English. See [Language support](https://aka.ms/cv-languages) for a list of supported language codes and which visual features are supported for each language.
-Language option only applies when you are using the standard model.
+Language option only applies when you're using the standard model.
#### [C#](#tab/csharp)
A populated URL might look like this:
If you're extracting captions or dense captions, you can ask for gender neutral captions. Gender neutral captions is optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**.
-Gender neurtal caption option only applies when you are using the standard model.
+Gender neutral caption option only applies when you're using the standard model.
#### [C#](#tab/csharp)
A populated URL might look like this:
An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when the **smartCrop** option (REST API) or **CropSuggestions** (SDK) was selected as part the visual feature list. If you select smartCrop/CropSuggestions but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).
-Smart cropping aspect rations only applies when you are using the standard model.
+Smart cropping aspect ratios only apply when you're using the standard model.
#### [C#](#tab/csharp)
cognitive-services Migrate V3 0 To V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v3-0-to-v3-1.md
For more details, see [Operation IDs](#operation-ids) later in this guide.
In the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation the following three properties are added:
- The `displayFormWordLevelTimestampsEnabled` property can be used to enable the reporting of word-level timestamps on the display form of the transcription results. The results are returned in the `displayWords` property of the transcription file.
-- The `diarization` property can be used to specify hints for the minimum and maximum number of speaker labels to generate when performing optional diarization (speaker separation). With this feature, the service is now able to generate speaker labels for more than two speakers. The `diarizationEnabled` property is deprecated and will be removed in the next major version of the API.
+- The `diarization` property can be used to specify hints for the minimum and maximum number of speaker labels to generate when performing optional diarization (speaker separation). With this feature, the service is now able to generate speaker labels for more than two speakers. To use this property you must also set the `diarizationEnabled` property to `true`.
- The `languageIdentification` property can be used to specify settings for language identification on the input prior to transcription. Up to 10 candidate locales are supported for language identification. The returned transcription will include a new `locale` property for the recognized language or the locale that you provided.

The `filter` property is added to the [Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_List), [Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles), and [Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListTranscriptions) operations. The `filter` expression can be used to select a subset of the available resources. You can filter by `displayName`, `description`, `createdDateTime`, `lastActionDateTime`, `status`, and `locale`. For example: `filter=createdDateTime gt 2022-02-01T11:00:00Z`
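As an illustration, here's a sketch of a Transcriptions_Create request body that combines the new properties; the content URL, locale, and speaker counts are placeholder assumptions, not values prescribed by this guide:

```json
{
  "displayName": "My transcription",
  "locale": "en-US",
  "contentUrls": ["https://example.com/audio.wav"],
  "properties": {
    "displayFormWordLevelTimestampsEnabled": true,
    "diarizationEnabled": true,
    "diarization": { "speakers": { "minCount": 1, "maxCount": 5 } },
    "languageIdentification": { "candidateLocales": ["en-US", "de-DE"] }
  }
}
```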
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-sas-tokens.md
Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your co
* When you create a shared access signature (SAS), the default duration is 48 hours. After 48 hours, you'll need to create a new token.
* Consider setting a longer duration period for the time you're using your storage account for Translator Service operations.
- * The value for the expiry time is a maximum of seven days from the creation of the SAS token.
+ * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
+ * **Account key**: There's no imposed maximum time limit; however, best practice is to configure an expiration policy to limit the interval and minimize the risk of compromise (a CLI sketch follows this list). [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
+ * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time of greater than seven days will still only be valid for seven days. For more information, *see* [Use Azure AD credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
-1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or a range of IP addresses must be public IPs, not private. For more information *see*, [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or a range of IP addresses must be public IPs, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS.
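As a sketch of the account-key expiration policy mentioned earlier, you can set one with the Azure CLI; the account and resource group names are placeholders, and the period uses the `<days>.<hours>:<minutes>:<seconds>` format:

```azurecli
az storage account update \
  --name "<storage-account-name>" \
  --resource-group "<resource-group-name>" \
  --sas-expiration-period "7.00:00:00"
```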
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
| :- | :-- | :- | :- | :- | : |
+| Switzerland | Toll-Free | - | - | Public Preview | Public Preview\* |
| Switzerland | Local | - | - | Public Preview | Public Preview\* |
| Switzerland, Germany, Netherlands, United Kingdom, Australia, France, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
| :- | :-- | :- | :- | :- | : |
+| Luxembourg | Toll-Free | - | - | Public Preview | Public Preview\* |
| Luxembourg | Local | - | - | Public Preview | Public Preview\* |

\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
| :- | :-- | :- | :- | :- | : |
+| Netherlands | Toll-Free | - | - | Public Preview | Public Preview\* |
| Netherlands | Local | - | - | Public Preview | Public Preview\* |
| Netherlands, Germany, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
|Number type |Monthly fee |
|--|--|
|Geographic |USD 3.00/mo |
+|Toll-Free |USD 20.00/mo |
### Usage charges

|Number type |To make calls* |To receive calls|
|--|--|--|
|Geographic |Starting at USD 0.2300/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.2300/min | USD 0.2800/min |
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
All prices shown below are in USD.
|Number type |Monthly fee |
|--|--|
|Geographic |USD 1.50/mo |
+|Toll-Free |USD 25.00/mo |
### Usage charges

|Number type |To make calls* |To receive calls|
|--|--|--|
|Geographic |Starting at USD 0.3500/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.3500/min |Starting at USD 0.0770/min |
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Previously updated : 07/24/2022 Last updated : 04/26/2023
Azure Communication Services provides a concept of a room for developers who are
Here are the main scenarios where rooms are useful:

-- **Rooms enable scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services.
-- **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. Developers can use the "Join Policy" for a room, to either let all or only a subset of users with assigned Communication Services identities to join a room call.
+- **Rooms enable scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services.
+- **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This will allow only a subset of users with assigned Communication Services identities to join a room call.
- **Rooms enable structured communications through roles and permissions.** Rooms allow developers to assign predefined roles to users to exercise a higher degree of control and structure in communication. Ensure only presenters can speak and share content in a large meeting or in a virtual conference.

## When to use rooms
Here are the main scenarios where rooms are useful:
Use rooms when you need any of the following capabilities:

- Control which users can join room calls.
- Need scheduling/coordinates that are enabled and expire at a specified time and date.
-- Need structured communication through roles and permissions for users.
+- Need structured communication through roles and permissions for users.
:::image type="content" source="../media/rooms/room-decision-tree.png" alt-text="Diagram showing decision tree to select a Room.":::
-| Capability | 1:N Call | 1:N Call <br>with ephemeral ID</br> | Room call |
+| Capability | 1:N Call | 1:N Call <br>with ephemeral ID</br> | Room call |
| | :: | :: | :: |
| Interactive participants | 350 | 350 | 350 |
| Ephemeral ID to distribute to participants | ❌ | ✔️ <br>(Group ID)</br> | ✔️ <br>(Room ID)</br> |
Use rooms when you need any of the following capabilities:
## Managing rooms and joining room calls

**Rooms API/SDK** is used to accomplish actions such as creating a room, adding participants, and setting up the schedule. Calling SDK is used to initiate the call within a Room from the client side. Most actions available in one-to-one or group calls in **Calling SDKs** are also available in room calls. The full list of capabilities offered in the Calling SDK is listed in the [Calling SDK Overview](../voice-video-calling/calling-sdk-features.md#detailed-capabilities).
-
+
| Capability | Calling SDK | Rooms API/SDK |
|-|--|--|
| Join a room call with voice and video | ✔️ | ❌ |
-| List participants that joined the rooms call | ✔️ | ❌ |
+| List participants that joined the rooms call | ✔️ | ❌ |
| Create room | ❌ | ✔️ |
| List all participants that are invited to the room | ❌ | ✔️ |
| Add or remove a VoIP participant | ❌ | ✔️ |
Use rooms when you need any of the following capabilities:
The picture below illustrates the concept of managing and joining the rooms.

:::image type="content" source="../media/rooms/rooms-join-call.png" alt-text="Diagram showing Rooms Management.":::
-
+
### Rooms API/SDKs

Rooms are created and managed via rooms APIs or SDKs. Use the rooms API/SDKs in your server application for `room` operations:
-- Create
+- Create
- Modify
- Delete
- Set and update the list of participants
- Set and modify the Room validity
-- Control who gets to join a room, using `roomJoinPolicy`. Details below.
- Assign roles and permissions to users. Details below.

### Calling SDKs
Use the [Calling SDKs](../voice-video-calling/calling-sdk-features.md) to join the room call.
Rooms can also be accessed using the [Azure Communication Services UI Library](https://azure.github.io/communication-ui-library/?path=/docs/rooms--page). The UI Library enables developers to add a call client that is Rooms enabled into their application with only a couple lines of code.
-## Control access to room calls
-
-Rooms can be set to operate in two levels of control over who is allowed to join a room call.
-
-| Room type | roomJoinPolicy value | Who can participate in Room?
-|-| | |
-| **Private Room** | `inviteOnly` | User must be explicitly added to the room roster, to be able to join a room |
-| **Open Room** | `communicationServiceUsers` | All valid users created under company's Azure Communication Service resource are allowed to join this room |
- ## Predefined participant roles and permissions
-Room participants can be assigned one of the following roles: **Presenter**, **Attendee** and **Consumer**. By default, a user is assigned an **Attendee** role, if no other role is assigned.
+Room participants can be assigned one of the following roles: **Presenter**, **Attendee** and **Consumer**. By default, a user is assigned an **Attendee** role, if no other role is assigned.
The tables below provide detailed capabilities mapped to the roles. At a high level, **Presenter** role has full control, **Attendee** capabilities are limited to audio and video, while **Consumer** can only receive audio, video and screen sharing.
The tables below provide detailed capabilities mapped to the roles. At a high le
## Event handling
-[Voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) published via [Event Grid](../../../event-grid/event-schema-communication-services.md) are annotated with room call information.
+[Voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) published via [Event Grid](../../../event-grid/event-schema-communication-services.md) are annotated with room call information.
- **CallStarted** is published when a room call starts. - **CallEnded** is published when a room call ends.
communication-services Connect Email Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/connect-email-communication-resource.md
Last updated 03/31/2023
+zone_pivot_groups: acs-js-csharp-java-python-portal-rest
# Quickstart: How to connect a verified email domain with Azure Communication Service resource
-In this quick start, you'll learn about how to connect a verified domain in Azure Communication Services to send email.
+In this quickstart, you'll learn how to connect a verified domain in Azure Communication Services to send email.
-## Connect an email domain to a Communication Service Resource
-1. [Create a Communication Services Resources](../create-communication-resource.md) to connect to a verified domain.
-2. In the Azure Communication Service Resource overview page, click **Domains** on the left navigation panel under Email.
- :::image type="content" source="./media/email-domains.png" alt-text="Screenshot that shows the left navigation panel for linking Email Domains." lightbox="media/email-domains-expanded.png":::
-3. Select one of the options below
- - Click **Connect domain** in the upper navigation bar.
- - Click **Connect domain** in the splash screen.
-
- :::image type="content" source="./media/email-domains-connect.png" alt-text="Screenshot that shows how to connect one of your verified email domains." lightbox="media/email-domains-connect-expanded.png":::
-4. Select a one of the verified domains by filtering
- - Subscription
- - Resource Group
- - Email Service
- - Verified Domain
-
- :::image type="content" source="./media/email-domains-connect-select.png" alt-text="Screenshot that shows how to filter and select one of the verified email domains to connect." lightbox="media/email-domains-connect-select-expanded.png":::
-> [!Note]
-> We allow only connecting the domains in the same geography. Please ensure that Data location for Communication Resource and Email Communication Resource that was selected during resource creation are the same.
-5. Click Connect
-
- :::image type="content" source="./media/email-domains-connected.png" alt-text="Screenshot that shows one of the verified email domains is now connected." lightbox="media/email-domains-connected-expanded.png":::
-## Disconnect an email domain from the Communication Service Resource
-
-1. In the Azure Communication Service Resource overview page, click **Domains** on the left navigation panel under Email.
-2. Select the Connected Domains click the ... and click Disconnect.
-
- :::image type="content" source="./media/email-domains-connect-disconnect.png" alt-text="Screenshot that shows how to disconnect the connected domain." lightbox="media/email-domains-connect-disconnect-expanded.png":::
## Next steps
communication-services Number Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/number-lookup.md
+
+ Title: Quickstart - Look up operator information for a phone number using Azure Communication Services
+description: Learn how to look up operator information for any phone number using Azure Communication Services
+++++ Last updated : 05/30/2023++++
+# Quickstart: Look up operator information for a phone number using Azure Communication Services
+
+Get started with the Phone Numbers client library for C# to look up operator information for phone numbers, which can be used to determine whether and how to communicate with that phone number. Follow these steps to install the package and look up operator information about a phone number.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- The latest version of [.NET Core client library](https://dotnet.microsoft.com/download/dotnet-core) for your operating system.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).
+
+### Prerequisite check
+
+- In a terminal or command window, run the `dotnet` command to check that the .NET SDK is installed.
+
+## Setting up
+
+To set up an environment for sending lookup queries, take the steps in the following sections.
+
+### Create a new C# application
+
+In a console window, such as cmd, PowerShell, or Bash, use the `dotnet new` command to create a new console app with the name `NumberLookupQuickstart`. This command creates a simple "Hello World" C# project with a single source file, **Program.cs**.
+
+```console
+dotnet new console -o NumberLookupQuickstart
+```
+
+Change your directory to the newly created app folder and use the `dotnet build` command to compile your application.
+
+```console
+cd NumberLookupQuickstart
+dotnet build
+```
+
+### Install the package
+
+While still in the application directory, install the Azure Communication Services PhoneNumbers client library for .NET package by using the following command.
+
+```console
+dotnet add package Azure.Communication.PhoneNumbers --version 1.0.0
+```
+
+Add a `using` directive to the top of **Program.cs** to include the `Azure.Communication` namespace.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Communication.PhoneNumbers;
+```
+
+Update the `Main` function signature to be async.
+
+```csharp
+static async Task Main(string[] args)
+{
+ ...
+}
+```
+
+## Code examples
+
+### Authenticate the client
+
+Phone Number clients can be authenticated using a connection string acquired from an Azure Communication Services resource in the [Azure portal][azure_portal].
+It's recommended to use a `COMMUNICATION_SERVICES_CONNECTION_STRING` environment variable to avoid putting your connection string in plain text within your code.
+
+```csharp
+// This code retrieves your connection string from an environment variable.
+string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
+
+PhoneNumbersClient client = new PhoneNumbersClient(connectionString, new PhoneNumbersClientOptions(PhoneNumbersClientOptions.ServiceVersion.V2023_05_01_Preview));
+```
+
+Phone Number clients can also authenticate with Azure Active Directory Authentication. With this option,
+`AZURE_CLIENT_SECRET`, `AZURE_CLIENT_ID` and `AZURE_TENANT_ID` environment variables need to be set up for authentication.
+
+```csharp
+// Get an endpoint to our Azure Communication Services resource.
+Uri endpoint = new Uri("<endpoint_url>");
+TokenCredential tokenCredential = new DefaultAzureCredential();
+client = new PhoneNumbersClient(endpoint, tokenCredential);
+```
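+Note that `DefaultAzureCredential` and `TokenCredential` come from the `Azure.Identity` and `Azure.Core` libraries rather than from the phone numbers package itself, so this option also needs the `Azure.Identity` package and a matching `using Azure.Identity;` directive:
+
+```console
+dotnet add package Azure.Identity
+```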
+
+### Look up operator information for a number
+
+To search for a phone number's operator information, call `SearchOperatorInformationAsync` from the `PhoneNumbersClient`.
+
+```csharp
+OperatorInformationResult searchResult = await client.SearchOperatorInformationAsync(new[] { "<target-phone-number>" });
+OperatorInformation operatorInformation = searchResult.Results[0];
+```
+
+Replace `<target-phone-number>` with the phone number you're looking up, usually a number you'd like to send a message to.
+
+> [!WARNING]
+> Provide phone numbers in E.164 international standard format, for example, +14255550123.
+
+### Use operator information
+
+You can now use the operator information. For this quickstart guide, we can print some of the details to the console.
+
+```csharp
+Console.WriteLine($"{operatorInformation.PhoneNumber} is a {operatorInformation.NumberType ?? "unknown"} number, operated by {operatorInformation.OperatorDetails.Name ?? "an unknown operator"}");
+```
+
+You may also use the operator information to determine whether to send an SMS. For more information on sending an SMS, see the [SMS Quickstart](../sms/send.md).
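+For example, here's a minimal sketch of such a gate; treating `"mobile"` as the expected `NumberType` value is an assumption to verify against the values the service actually returns:
+
+```csharp
+// Hypothetical check: only proceed to SMS when the lookup reports a mobile number.
+if (string.Equals(operatorInformation.NumberType, "mobile", StringComparison.OrdinalIgnoreCase))
+{
+    Console.WriteLine($"{operatorInformation.PhoneNumber} appears to be a mobile number; an SMS is likely to be deliverable.");
+}
+```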
+
+## Run the code
+
+Run the application from your application directory with the `dotnet run` command.
+
+```console
+dotnet run
+```
+
+## Troubleshooting
+
+Common questions and issues:
+
+- The data returned by this endpoint is subject to various international laws and regulations; therefore, the accuracy of the results depends on several factors. These factors include whether the number has been ported, the country code, and the approval status of the caller. Based on these factors, operator information may not be available for some phone numbers or may reflect the original operator of the phone number, not the current operator.
+
+## Next steps
+
+In this quickstart you learned how to:
+> [!div class="checklist"]
+> * Look up operator information for a phone number
+
+[!div class="nextstepaction"]
+[Send an SMS](../sms/send.md)
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
Confidential VMs support the following VM sizes:
### OS support

Confidential VMs support the following OS options:
-
- Ubuntu 20.04 LTS
- Ubuntu 22.04 LTS
-- Windows Server 2019
-- Windows Server 2022
+- Windows Server 2019 Datacenter - x64 Gen 2
+- Windows Server 2019 Datacenter Server Core - x64 Gen 2
+- Windows Server 2022 Datacenter - x64 Gen 2
+- Windows Server 2022 Datacenter: Azure Edition Core - x64 Gen 2
+- Windows Server 2022 Datacenter: Azure Edition - x64 Gen 2
+- Windows Server 2022 Datacenter Server Core - x64 Gen 2
+- Windows 11 Enterprise N, version 22H2 -x64 Gen 2
+- Windows 11 Pro, version 22H2 ZH-CN -x64 Gen 2
+- Windows 11 Pro, version 22H2 -x64 Gen 2
+- Windows 11 Pro N, version 22H2 -x64 Gen 2
+- Windows 11 Enterprise, version 22H2 -x64 Gen 2
+- Windows 11 Enterprise multi-session, version 22H2 -x64 Gen 2
### Regions
confidential-computing Quick Create Confidential Vm Portal Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal-amd.md
To create a confidential VM in the Azure portal using an Azure Marketplace image
1. For **Security Type**, select **Confidential virtual machines**.
- 1. For **Image**, select the OS image to use for your VM. For this tutorial, select **Ubuntu Server 20.04 LTS (Confidential VM)**, **Windows Server 2019 [Small disk] Data Center**, or **Windows Server 2022 [Small disk] Data Center**.
-
- > [!TIP]
- > Optionally, select **See all images** to open Azure Marketplace. Select the filter **Security Type** &gt; **Confidential** to show all available confidential VM images.
+ 1. For **Image**, select the OS image to use for your VM. Select **See all images** to open Azure Marketplace. Select the filter **Security Type** &gt; **Confidential** to show all available confidential VM images.
1. Toggle [Generation 2](../virtual-machines/generation-2.md) images. Confidential VMs only run on Generation 2 images. To ensure this, under **Image**, select **Configure VM generation**. In the pane **Configure VM generation**, for **VM generation**, select **Generation 2**. Then, select **Apply**.
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
It's not possible to resize a non-confidential VM to a confidential VM.
OS images for confidential VMs have to meet certain security and compatibility requirements. Qualified images support the secure mounting, attestation, optional [confidential OS disk encryption](confidential-vm-overview.md#confidential-os-disk-encryption), and isolation from underlying cloud infrastructure. These images include:

-- Ubuntu 20.04 Gen 2
-- Ubuntu 22.04 Gen 2
-- Windows Server 2019 Gen 2
-- Windows Server 2022 Gen 2
+- Ubuntu 20.04 LTS
+- Ubuntu 22.04 LTS
+- Windows Server 2019 Datacenter - x64 Gen 2
+- Windows Server 2019 Datacenter Server Core - x64 Gen 2
+- Windows Server 2022 Datacenter - x64 Gen 2
+- Windows Server 2022 Datacenter: Azure Edition Core - x64 Gen 2
+- Windows Server 2022 Datacenter: Azure Edition - x64 Gen 2
+- Windows Server 2022 Datacenter Server Core - x64 Gen 2
+- Windows 11 Enterprise N, version 22H2 -x64 Gen 2
+- Windows 11 Pro, version 22H2 ZH-CN -x64 Gen 2
+- Windows 11 Pro, version 22H2 -x64 Gen 2
+- Windows 11 Pro N, version 22H2 -x64 Gen 2
+- Windows 11 Enterprise, version 22H2 -x64 Gen 2
+- Windows 11 Enterprise multi-session, version 22H2 -x64 Gen 2
For more information about supported and unsupported VM scenarios, see [support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
container-apps Connect Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/connect-apps.md
You can call other container apps in the same environment from your application
- default fully qualified domain name (FQDN)
- a custom domain name
-- the container app name
+- the container app name, for instance `https://<APP_NAME>` for internal requests
- a Dapr URL

> [!NOTE]
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/change-feed.md
In a multi-region Azure Cosmos DB account, changes in one region are available i
## Change feed modes
-There are two change feed modes available: latest version mode and all versions and deletes mode. The mode that change feed is read in determines which operations changes are captured from and the metadata available for each change. It's possible to consume the change feed in different modes across multiple applications for the same Azure Cosmos DB container.
+There are two [change feed modes](./nosql/change-feed-modes.md) available: latest version mode and all versions and deletes mode. The mode that change feed is read in determines which operations changes are captured from and the metadata available for each change. It's possible to consume the change feed in different modes across multiple applications for the same Azure Cosmos DB container.
### Latest version mode
cosmos-db Database Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-security.md
Let's dig into each one in detail.
|Respond to attacks|Once you have contacted Azure support to report a potential attack, a five-step incident response process is kicked off. The goal of the five-step process is to restore normal service security and operations. The five-step process restores services as quickly as possible after an issue is detected and an investigation is started.<br><br>Learn more in [Microsoft Azure Security Response in the Cloud](https://azure.microsoft.com/resources/shared-responsibilities-for-cloud-computing/).|
|Geo-fencing|Azure Cosmos DB ensures data governance for sovereign regions (for example, Germany, China, US Gov).|
|Protected facilities|Data in Azure Cosmos DB is stored on SSDs in Azure's protected data centers.<br><br>Learn more in [Microsoft global datacenters](https://www.microsoft.com/en-us/cloud-platform/global-datacenters)|
-|HTTPS/SSL/TLS encryption|All connections to Azure Cosmos DB support HTTPS. Azure Cosmos DB supports TLS levels up to 1.3 (included).<br>It's possible to enforce a minimum TLS level server-side. To do so, open an [Azure support ticket](https://azure.microsoft.com/support/options/).|
+|HTTPS/SSL/TLS encryption|All connections to Azure Cosmos DB support HTTPS. Azure Cosmos DB supports TLS levels up to 1.2 (included).<br>It's possible to enforce a minimum TLS level server-side. To do so, refer to the self-service guide [Self-serve minimum TLS version enforcement in Azure Cosmos DB](./self-serve-minimum-tls-enforcement.md).|
|Encryption at rest|All data stored into Azure Cosmos DB is encrypted at rest. Learn more in [Azure Cosmos DB encryption at rest](./database-encryption-at-rest.md)|
|Patched servers|As a managed database, Azure Cosmos DB eliminates the need to manage and patch servers, that's done for you, automatically.|
|Administrative accounts with strong passwords|It's hard to believe we even need to mention this requirement, but unlike some of our competitors, it's impossible to have an administrative account with no password in Azure Cosmos DB.<br><br> Security via TLS and HMAC secret based authentication is baked in by default.|
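For illustration, here's a sketch of the server-side enforcement mentioned in the TLS row; the account and resource group names are placeholders, and the `--minimal-tls-version` parameter is assumed to be available in your Azure CLI version:

```azurecli
az cosmosdb update \
  --name "<cosmos-account-name>" \
  --resource-group "<resource-group-name>" \
  --minimal-tls-version Tls12
```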
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache.md
Some workloads shouldn't consider the integrated cache, including:
- Write-heavy workloads
- Rarely repeated point reads or queries
+- Workloads reading the change feed
## Item cache
cosmos-db Change Feed Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-modes.md
Last updated 05/09/2023
There are two change feed modes in Azure Cosmos DB. Each mode offers the same core functionality with differences including the operations captured in the feed, metadata available for each change, and retention period of changes. You can consume the change feed in different modes across multiple applications for the same Azure Cosmos DB container to fit the requirements of each workload.
+> [!Note]
+> Do you have any feedback about change feed modes? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team: cosmoschangefeed@microsoft.com
+
## Latest version change feed mode

Latest version mode is a persistent record of changes to items from creates and updates. You get the latest version of each item in the container. For example, if an item is created and then updated before you read the change feed, only the updated version appears in the change feed. Deletes aren't captured as changes, and when an item is deleted it's no longer available in the feed. Latest version change feed mode is enabled by default and is compatible with all Azure Cosmos DB accounts except API for Table and API for PostgreSQL. This mode was previously the default way to consume the change feed.

## All versions and deletes change feed mode (preview)
-All versions and deletes mode (preview) is a persistent record of all changes to items from create, update, and delete operations. You get a record of each change to items in the order that it occurred, including intermediate changes to an item between change feed reads. For example, if an item is created and then updated before you read the change feed, both the create and the update versions of the item appear in the change feed. To read from the change feed in all versions and deletes mode, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Turning on continuous backups creates the all versions and deletes change feed. You can only read changes that occurred within the continuous backup period when using this change feed mode. This mode is only compatible with Azure Cosmos DB for NoSQL accounts.
+All versions and deletes mode (preview) is a persistent record of all changes to items from create, update, and delete operations. You get a record of each change to items in the order that it occurred, including intermediate changes to an item between change feed reads. For example, if an item is created and then updated before you read the change feed, both the create and the update versions of the item appear in the change feed. To read from the change feed in all versions and deletes mode, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Turning on continuous backups creates the all versions and deletes change feed. You can only read changes that occurred within the continuous backup period when using this change feed mode. This mode is only compatible with Azure Cosmos DB for NoSQL accounts. Learn more about how to [sign up for the preview](#getting-started).
## Change feed use cases
During the preview, the following methods to read the change feed are available
| **Method to read change feed** | **.NET** | **Java** | **Python** | **Node/JS** |
| --- | --- | --- | --- | --- |
-| [Change feed pull model](change-feed-pull-model.md) | [>= 3.32.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.17.0-preview) | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.37.0) | No | No |
+| [Change feed pull model](change-feed-pull-model.md) | [>= 3.32.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.32.0-preview) | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.37.0) | No | No |
| [Change feed processor](change-feed-processor.md) | No | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.42.0) | No | No |
| Azure Functions trigger | No | No | No | No |
During the preview, the following methods to read the change feed are available
### Getting started
-To get started using all versions and deletes change feed mode, enroll in the preview via the [Preview Features page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. Search for the **All versions and deletes change feed mode** feature and select **Register**.
+To get started using all versions and deletes change feed mode, enroll in the preview via the [Preview Features page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. Search for the **AllVersionsAndDeletesChangeFeed** feature and select **Register**.
:::image type="content" source="media/change-feed-modes/enroll-in-preview.png" alt-text="Screenshot of All versions and deletes change feed mode feature in Preview Features page in Subscriptions overview in Azure portal.":::
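If you prefer the command line, here's a sketch of the same registration via the Azure CLI; the `Microsoft.DocumentDB` namespace is an assumption you should verify for your subscription:

```azurecli
az feature register --namespace "Microsoft.DocumentDB" --name "AllVersionsAndDeletesChangeFeed"

# Propagate the registration to the resource provider.
az provider register --namespace "Microsoft.DocumentDB"
```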
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-pull-model.md
# Change feed pull model in Azure Cosmos DB [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-With the change feed pull model, you can consume the Azure Cosmos DB change feed at your own pace. As you can also do with the [change feed processor](change-feed-processor.md), you can use the change feed pull model to parallelize the processing of changes across multiple change feed consumers.
+With the change feed pull model, you can consume the Azure Cosmos DB change feed at your own pace. Similar to the [change feed processor](change-feed-processor.md), you can use the change feed pull model to parallelize the processing of changes across multiple change feed consumers.
## Comparing with change feed processor
-Many scenarios can process the change feed using either the [change feed processor](change-feed-processor.md) or the pull model. The pull model's continuation tokens and the change feed processor's lease container are both "bookmarks" for the last processed item (or batch of items) in the change feed.
+Many scenarios can process the change feed using either the [change feed processor](change-feed-processor.md) or the pull model. The pull model's continuation tokens and the change feed processor's lease container are both "bookmarks" for the last processed item, or batch of items, in the change feed.
However, you can't convert continuation tokens to a lease (or vice versa).
Here are some key differences between the change feed processor and pull model:
### [.NET](#tab/dotnet)
-To process the change feed using the pull model, create a `FeedIterator`. When you initially create a `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value, which consists of both the starting position for reading changes and the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that can be read from the change feed using that specific `FeedIterator`. You must also specify a required `ChangeFeedMode` value for the mode in which you want to process changes: [latest version](change-feed-modes.md#latest-version-change-feed-mode) or [all versions and deletes](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). Use `ChangeFeedMode.Incremental` for reading the change feed in latest version mode or `ChangeFeedMode.LatestVersion` in the preview NuGet package. If you're reading the change feed in all versions and deletes mode, you must select a change feed start from value of either `Now()` or from a specific continuation token.
+To process the change feed using the pull model, create a `FeedIterator`. When you initially create a `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value, which consists of both the starting position for reading changes and the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that can be read from the change feed using that specific `FeedIterator`. You must also specify a required `ChangeFeedMode` value for the mode in which you want to process changes: [latest version](change-feed-modes.md#latest-version-change-feed-mode) or [all versions and deletes](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). Use either `ChangeFeedMode.LatestVersion` or `ChangeFeedMode.AllVersionsAndDeletes` to indicate which mode you want to read change feed in. When using all versions and deletes mode, you must select a change feed start from value of either `Now()` or from a specific continuation token.
You can optionally specify `ChangeFeedRequestOptions` to set a `PageSizeHint`. When set, this property sets the maximum number of items received per page. If operations in the monitored collection are performed through stored procedures, transaction scope is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch.

Here's an example for obtaining a `FeedIterator` in latest version mode that returns entity objects, in this case a `User` object:

```csharp
-FeedIterator<User> InteratorWithPOCOS = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+FeedIterator<User> InteratorWithPOCOS = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.LatestVersion);
```
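And here's a short sketch of passing `ChangeFeedRequestOptions` to cap the page size, assuming the same `container` and `User` type as above:

```csharp
ChangeFeedRequestOptions options = new ChangeFeedRequestOptions
{
    // Ask for at most 100 items per page; transactional batches may still exceed this.
    PageSizeHint = 100
};

FeedIterator<User> iteratorWithOptions = container.GetChangeFeedIterator<User>(
    ChangeFeedStartFrom.Beginning(), ChangeFeedMode.LatestVersion, options);
```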
-All versions and deletes mode is in preview and can be used with .NET SDK version >= `3.32.0-preview`. Here's an example for obtaining a `FeedIterator` in all versions and deletes mode that returns dynamic objects:
+> [!TIP]
+> Prior to version `3.34.0`, latest version mode can be used by setting `ChangeFeedMode.Incremental`. Both `Incremental` and `LatestVersion` refer to latest version mode of the change feed and applications that use either mode will see the same behavior.
+
+All versions and deletes mode is in preview and can be used with preview .NET SDK versions >= `3.32.0-preview`. Here's an example for obtaining a `FeedIterator` in all versions and deletes mode that returns dynamic objects:
```csharp
FeedIterator<dynamic> InteratorWithDynamic = container.GetChangeFeedIterator<dynamic>(ChangeFeedStartFrom.Now(), ChangeFeedMode.AllVersionsAndDeletes);
```
The `FeedIterator` for both change feed modes comes in two flavors. In addition
Here's an example for obtaining a `FeedIterator` in latest version mode that returns a `Stream`:

```csharp
-FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.LatestVersion);
```

### Consuming an entire container's changes
FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeF
If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example, which starts reading all changes starting at the current time using latest version mode:

```csharp
-FeedIterator<User> iteratorForTheEntireContainer = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental);
+FeedIterator<User> iteratorForTheEntireContainer = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.LatestVersion);
while (iteratorForTheEntireContainer.HasMoreResults) {
In some cases, you may only want to process a specific partition key's changes.
```csharp
FeedIterator<User> iteratorForPartitionKey = container.GetChangeFeedIterator<User>(
- ChangeFeedStartFrom.Beginning(FeedRange.FromPartitionKey(new PartitionKey("PartitionKeyValue")), ChangeFeedMode.Incremental));
+ ChangeFeedStartFrom.Beginning(FeedRange.FromPartitionKey(new PartitionKey("PartitionKeyValue")), ChangeFeedMode.LatestVersion));
while (iteratorForPartitionKey.HasMoreResults) {
Here's a sample that shows how to read from the beginning of the container's cha
Machine 1:

```csharp
-FeedIterator<User> iteratorA = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[0]), ChangeFeedMode.Incremental);
+FeedIterator<User> iteratorA = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[0]), ChangeFeedMode.LatestVersion);
while (iteratorA.HasMoreResults)
{
    FeedResponse<User> response = await iteratorA.ReadNextAsync();
while (iteratorA.HasMoreResults)
Machine 2:

```csharp
-FeedIterator<User> iteratorB = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[1]), ChangeFeedMode.Incremental);
+FeedIterator<User> iteratorB = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[1]), ChangeFeedMode.LatestVersion);
while (iteratorB.HasMoreResults)
{
    FeedResponse<User> response = await iteratorB.ReadNextAsync();
while (iteratorB.HasMoreResults)
You can save the position of your `FeedIterator` by obtaining the continuation token. A continuation token is a string value that keeps track of your FeedIterator's last processed changes and allows the `FeedIterator` to resume at this point later. The continuation token, if specified, takes precedence over the start time and start from beginning values. The following code reads through the change feed since container creation. After no more changes are available, it will persist a continuation token so that change feed consumption can be later resumed.

```csharp
-FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
+FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.LatestVersion);
string continuation = null;
while (iterator.HasMoreResults)
}

// Some time later when I want to check changes again
-FeedIterator<User> iteratorThatResumesFromLastPoint = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.ContinuationToken(continuation), ChangeFeedMode.Incremental);
+FeedIterator<User> iteratorThatResumesFromLastPoint = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.ContinuationToken(continuation), ChangeFeedMode.LatestVersion);
```

As long as the Azure Cosmos DB container still exists, a FeedIterator's continuation token never expires.
cosmos-db Migrate Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-passwordless.md
+
+ Title: Migrate applications to use passwordless authentication with Azure Cosmos DB for NoSQL
+
+description: Learn to migrate existing applications away from connection strings to instead use Azure AD and Azure RBAC for enhanced security.
+++ Last updated : 04/05/2023+++++
+# Migrate an application to use passwordless connections with Azure Cosmos DB for NoSQL
+
+Application requests to Azure Cosmos DB for NoSQL must be authenticated. Although there are multiple options for authenticating to Azure Cosmos DB, you should prioritize passwordless connections in your applications when possible. Traditional authentication methods that use connection strings with passwords or secret keys create security risks and complications. Visit the [passwordless connections for Azure services](/azure/developer/intro/passwordless-overview) hub to learn more about the advantages of moving to passwordless connections.
+
+The following tutorial explains how to migrate an existing application to connect to Azure Cosmos DB for NoSQL using passwordless connections instead of a key-based solution.
+
+## Configure roles and users for local development authentication
++
+### Sign in to Azure locally
++
+### Migrate the app code to use passwordless connections
+
+1. To use `DefaultAzureCredential` in a .NET application, install the `Azure.Identity` package:
+
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```csharp
+ using Azure.Identity;
+ ```
+
+1. Identify the locations in your code that create a `CosmosClient` object to connect to Azure Cosmos DB. Update your code to match the following example.
+
+ ```csharp
+ using CosmosClient client = new(
+ accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
+ tokenCredential: new DefaultAzureCredential()
+ );
+ ```
+
+### Run the app locally
+
+After making these code changes, run your application locally. The new configuration should pick up your local credentials, such as the Azure CLI, Visual Studio, or IntelliJ. The roles you assigned to your local dev user in Azure allow your app to connect to the Azure service locally.
+
+## Configure the Azure hosting environment
+
+Once your application is configured to use passwordless connections and runs locally, the same code can authenticate to Azure services after it's deployed to Azure. The sections that follow explain how to configure a deployed application to connect to Azure Cosmos DB using a managed identity.
+
+### Create the managed identity
++
+#### Associate the managed identity with your web app
+
+You need to configure your web app to use the managed identity you created. Assign the identity to your app using either the Azure portal or the Azure CLI.
+
+# [Azure portal](#tab/azure-portal-associate)
+
+Complete the following steps in the Azure portal to associate an identity with your app. These same steps apply to the following Azure services:
+
+* Azure Spring Apps
+* Azure Container Apps
+* Azure virtual machines
+* Azure Kubernetes Service
+
+1. Navigate to the overview page of your web app.
+1. Select **Identity** from the left navigation.
+1. On the **Identity** page, switch to the **User assigned** tab.
+1. Select **+ Add** to open the **Add user assigned managed identity** flyout.
+1. Select the subscription you used previously to create the identity.
+1. Search for the **MigrationIdentity** by name and select it from the search results.
+1. Select **Add** to associate the identity with your app.
+
+ :::image type="content" source="../../../articles/storage/common/media/create-user-assigned-identity-small.png" alt-text="Screenshot showing how to create a user assigned identity." lightbox="../../../articles/storage/common/media/create-user-assigned-identity.png":::
+
+# [Azure CLI](#tab/azure-cli-associate)
++++
+### Assign roles to the managed identity
+
+Grant permissions to the managed identity by assigning it the custom role you created, just like you did with your local development user.
+
+To assign a role at the resource level using the Azure CLI, you first must retrieve the resource ID using the [az cosmosdb show](/cli/azure/cosmosdb) command. You can filter the output properties using the `--query` parameter.
+
+```azurecli
+az cosmosdb show \
+ --resource-group '<resource-group-name>' \
+ --name '<cosmosdb-name>' \
+ --query id
+```
+
+Copy the output ID from the preceding command. You can then assign roles using the [az role assignment](/cli/azure/role/assignment) command of the Azure CLI.
+
+```azurecli
+az role assignment create \
+ --assignee "<your-managed-identity-name>" \
+ --role "PasswordlessReadWrite" \
+ --scope "<cosmosdb-resource-id>"
+```
+
+### Update the application code
+
+You need to configure your application code to look for the specific managed identity you created when it's deployed to Azure. In some scenarios, explicitly setting the managed identity for the app also prevents other environment identities from accidentally being detected and used automatically.
+
+1. On the managed identity overview page, copy the client ID value to your clipboard.
+1. Update the `DefaultAzureCredential` object to specify this managed identity client ID:
+
+ ```csharp
+ // TODO: Update the <managed-identity-client-id> placeholder.
+ var credential = new DefaultAzureCredential(
+ new DefaultAzureCredentialOptions
+ {
+ ManagedIdentityClientId = "<managed-identity-client-id>"
+ });
+ ```
+
+3. Redeploy your code to Azure after making this change in order for the configuration updates to be applied.
+
+### Test the app
+
+After deploying the updated code, browse to your hosted application in the browser. Your app should be able to connect to Cosmos DB successfully. Keep in mind that it may take several minutes for the role assignments to propagate through your Azure environment. Your application is now configured to run both locally and in a production environment without the developers having to manage secrets in the application itself.
+
+## Next steps
+
+In this tutorial, you learned how to migrate an application to passwordless connections.
+
+You can read the following resources to explore the concepts discussed in this article in more depth:
+
+* [Authorize access to blobs using Azure Active Directory](../../storage/blobs/authorize-access-azure-active-directory.md)
+* To learn more about .NET, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
cosmos-db Computed Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/computed-properties.md
Computed properties in Azure Cosmos DB have values derived from existing item pr
## Computed property definition
-Computed properties must be at the top level in the item and can't have a nested path. Each computed property definition has two components: a name and a query. The name is the computed property name, and the query defines logic to calculate the property value for each item. Computed properties are scoped to an individual item and therefore can't use values from multiple items or rely on other computed properties.
+Computed properties must be at the top level in the item and can't have a nested path. Each computed property definition has two components: a name and a query. The name is the computed property name, and the query defines logic to calculate the property value for each item. Computed properties are scoped to an individual item and therefore can't use values from multiple items or rely on other computed properties. Every container can have a maximum of 20 computed properties.
Example computed property definition:

```json
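{
  "name": "cp_lowerName",
  "query": "SELECT VALUE LOWER(c.name) FROM c"
}
```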
The constraints on computed property query definitions are:
- Queries can't use any of the following clauses: WHERE, GROUP BY, ORDER BY, TOP, DISTINCT, OFFSET LIMIT, EXISTS, ALL, and NONE.

-- Aggregate and spatial functions aren't supported.
-
- Queries can't include a scalar subquery.
+- Aggregate functions, spatial functions, non-deterministic functions and user defined functions aren't supported.
+
## Creating computed properties

During the preview, computed properties must be created using the .NET v3 SDK. Once the computed properties have been created, you can execute queries that reference them using any method including all SDKs and Data Explorer in the Azure portal.

|**SDK** |**Supported version** |**Notes** |
|--|-|-|
-|.NET SDK v3 |>= 3.34.0-preview |Computed properties are currently only available in preview package versions. |
+|.NET SDK v3 |>= [3.34.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.34.0-preview) |Computed properties are currently only available in preview package versions. |
### Create computed properties using the SDK
Here's an example of how to create computed properties in a new container using the .NET SDK:
```csharp
ContainerProperties containerProperties = new ContainerProperties("myContainer", "/pk")
{
- ComputedProperties = new Collection<ComputedProperty
+ ComputedProperties = new Collection<ComputedProperty>
    {
        new ComputedProperty
        {
Here's an example of how to create computed properties in a new container using the .NET SDK:
} };
- ContainerResponse response = await this.database.CreateContainerAsync(containerProperties);
+ Container container = await client.GetDatabase("myDatabase").CreateContainerAsync(containerProperties);
+```
+
+Here's an example of how to update computed properties on an existing container using the .NET SDK:
+
+```csharp
+ var container = client.GetDatabase("myDatabase").GetContainer("myContainer");
+
+ // Read the current container properties
+ var containerProperties = await container.ReadContainerAsync();
+ // Make the necessary updates to the container properties
+ containerProperties.Resource.ComputedProperties = new Collection<ComputedProperty>
+ {
+ new ComputedProperty
+ {
+ Name = "cp_lowerName",
+ Query = "SELECT VALUE LOWER(c.name) FROM c"
+ },
+ new ComputedProperty
+ {
+ Name = "cp_upperName",
+ Query = "SELECT VALUE UPPER(c.name) FROM c"
+ }
+ };
+ // Update container with changes
+ await container.ReplaceContainerAsync(containerProperties);
```
+> [!TIP]
+> Every time you update container properties, the old values are overwritten.
+> If you have existing computed properties and want to add new ones, ensure you add both new and existing computed properties to the collection.
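Given that constraint, a pattern like the following sketch can help preserve existing definitions when adding a new one. It reuses the SDK calls shown above; the added property name is an illustrative assumption:

```csharp
// Read current properties, append the new definition, then replace the container.
ContainerResponse response = await container.ReadContainerAsync();
ContainerProperties properties = response.Resource;

properties.ComputedProperties.Add(new ComputedProperty
{
    Name = "cp_nameLength",
    Query = "SELECT VALUE LENGTH(c.name) FROM c"
});

await container.ReplaceContainerAsync(properties);
```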
+ ## Using computed properties in queries
-Computed properties can be referenced in queries the same way as persisted properties. Values for computed properties that aren't indexed are evaluated during runtime using the computed property definition. If a computed property is indexed, the index is used in the same way as it is for persisted properties, and the computed property is evaluated on an as needed basis.
+Computed properties can be referenced in queries the same way as persisted properties. Values for computed properties that aren't indexed are evaluated at runtime using the computed property definition. If a computed property is indexed, the index is used in the same way as it is for persisted properties, and the computed property is evaluated on an as-needed basis. It's recommended that you [add indexes on your computed properties](#indexing-computed-properties) for the best cost and performance.
These examples use the quickstart products dataset in [Data Explorer](../../data-explorer.md). Launch the quick start to get started and load the dataset in a new container.
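As a sketch of what referencing a computed property looks like from the .NET SDK (the container and property names reuse the earlier examples):

```csharp
// The computed property is referenced exactly like a persisted property.
FeedIterator<dynamic> feed = container.GetItemQueryIterator<dynamic>(
    "SELECT c.id, c.cp_lowerName FROM c");

while (feed.HasMoreResults)
{
    FeedResponse<dynamic> page = await feed.ReadNextAsync();
    foreach (var item in page)
    {
        Console.WriteLine(item);
    }
}
```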
Add a composite index on two properties where one is computed, `cp_myComputedPro
"compositeIndexes": [ [ {
- "path":"/cp_myComputedProperty",
+ "path":"/cp_myComputedProperty"
}, {
- "path":"/path/to/myPersistedProperty",
+ "path":"/path/to/myPersistedProperty"
} ] ]
cosmos-db Howto Troubleshoot Read Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-troubleshoot-read-only.md
Previously updated : 08/03/2021 Last updated : 05/24/2023 # Troubleshoot read-only access to Azure Cosmos DB for PostgreSQL
almost full. Preventing writes stops the disk from continuing to fill, and
keeps the node available for reads. During the read-only state, you can take measures to free more disk space.
-Specifically, a node becomes read-only when it has less than
-5 GiB of free storage left. When the server becomes read-only, all existing
-sessions are disconnected, and uncommitted transactions are rolled back. Any
-write operations and transaction commits will fail, while read queries will
-continue to work.
+> [!IMPORTANT]
+>
+> Even in the read-only state, in-flight transactions may continue writing to the
+> database, further decreasing the available storage. If available storage continues
+> to decrease after the node is set to read-only, all existing sessions
+> are disconnected and uncommitted transactions are rolled back.
## Ways to recover write-access
+If a node is set to the read-only state, you need to free some disk space to unblock writes on the node. Write access is re-enabled when the node has more than 16 GiB of free storage (for nodes with 256 GiB of storage or more) or more than 7% free storage (for nodes with 128 GiB of storage or less).
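To see where the disk space is going before you free it, a query along these lines (standard PostgreSQL catalog views, not from the original article) lists the largest tables on a node:

```sql
-- List the ten largest tables, including index and TOAST data.
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```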
+ ### On the coordinator node * [Increase storage
cosmos-db Self Serve Minimum Tls Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/self-serve-minimum-tls-enforcement.md
This article discusses how to enforce a minimum version of the TLS protocol for
Because of the multi-tenant nature of Cosmos DB, the service is required to meet the access and security needs of every user. To achieve this, **Cosmos DB enforces minimum TLS protocols at the application layer**, and not lower layers in the network stack where TLS operates. This enforcement occurs on any authenticated request to a specific database account, according to the settings set on that account by the customer.
-The **minimum service-wide accepted version is TLS 1.0**. This can be changed on a per account basis, as discussed in the following section.
+The **minimum service-wide accepted version is TLS 1.0**. This selection can be changed on a per account basis, as discussed in the following section.
## How to set the minimum TLS version for my Cosmos DB database account
The **default value for new and existing accounts is `Tls`**.
> [!IMPORTANT] > Staring on April 1st, 2023, the **default value for new accounts will be switched to `Tls12`**.
+### Set Minimal TLS Protocol in Azure Cosmos DB using the Portal
+
+This self-serve feature is available in the portal when creating and editing an account. Azure Cosmos DB accounts enforce the TLS 1.2 protocol. However, Azure Cosmos DB also supports the following TLS protocols, depending on the API kind selected.
+
+- **MongoDB:** TLS 1.2
+
+- **Cassandra:** TLS 1.2
+
+- **Table, SQL, and Graph:** TLS 1.0, TLS 1.1, and TLS 1.2
+
+
+
+### Steps to set Minimal TLS Protocol while creating an account
+
+If you're using an API kind that only supports TLS 1.2, you'll notice that the TLS protocol option at the bottom of the Networking tab is disabled.
++++
+If you're using an API kind that accepts multiple TLS protocols, navigate to the Networking tab, where the Minimum Transport Layer Security Protocol option is available. You can change the protocol by selecting the desired option from the dropdown.
+++
+After setting up your account, on the Review + create tab, you can verify in the Networking section at the bottom that the TLS protocol is set as you specified.
+++
+### Steps to set the Minimal TLS Protocol while editing an account
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+2. Select Networking from the left menu, then select the Connectivity tab.
+
+3. You'll find the Minimum Transport Layer Security Protocol option. If you're using an API kind that only supports TLS 1.2, this option is disabled. Otherwise, you can select the desired TLS protocol.
++
+ :::image type="content" source="media/self-serve-minimum-tls-enforcement/edit.png" alt-text="Screenshot of minimum transport layer security protocol option.":::
+
+
+4. Select Save after you change the TLS protocol.
+
+ :::image type="content" source="media/self-serve-minimum-tls-enforcement/save.png" alt-text="Screenshot of save after change.":::
+
+
+5. Once the change is saved, you'll receive a success notification. The change can take up to 15 minutes to take effect after the configuration update completes.
+
+ :::image type="content" source="media/self-serve-minimum-tls-enforcement/notification-success.png" alt-text="Screenshot of success notification.":::
+
+
+ ### Set via Azure CLI
-To set using Azure CLI, use the command below:
+To set using Azure CLI, use the command:
```azurecli-interactive subId=$(az account show --query id -o tsv)
az rest --uri "/subscriptions/$subId/resourceGroups/$rg/providers/Microsoft.Docu
### Set via Azure PowerShell
-To set using Azure PowerShell, use the command below:
+To set using Azure PowerShell, use the command:
```azurepowershell-interactive $minimalTlsVersion = 'Tls12'
Invoke-AzRestMethod @patchParameters
### Set via ARM template
-To set this property using an ARM template, update your existing template or export a new template for your current deployment, then add `"minimalTlsVersion"` to the properties for the `databaseAccounts` resources, with the desired minimum TLS version value. Below is a basic example of an Azure Resource Manager template with this property setting, using a parameter.
+To set this property using an ARM template, update your existing template or export a new template for your current deployment, then add `"minimalTlsVersion"` to the properties for the `databaseAccounts` resources, with the desired minimum TLS version value. Provided here is a basic example of an Azure Resource Manager template with this property setting, using a parameter.
```json {
You can also get the current value of the `minimalTlsVersion` property by using
### Get current value via Azure CLI
-To get the current value of the property using Azure CLI, run the command below:
+To get the current value of the property using Azure CLI, run the command:
```azurecli-interactive subId=$(az account show --query id -o tsv)
az rest --uri "/subscriptions/$subId/resourceGroups/$rg/providers/Microsoft.Docu
### Get current value via Azure PowerShell
-To get the current value of the property using Azure PowerShell, run the command below:
+To get the current value of the property using Azure PowerShell, run the command:
```azurepowershell-interactive $getParameters = @{
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
description: This article explains how to explore preview features and provides a list of the recent previews you might be interested in. Previously updated : 07/11/2022 Last updated : 05/25/2023
We encourage you to try out the preview features available in Cost Management La
## Remember preview features across sessions
-Cost Management now remembers preview features across sessions in the preview portal. Select the preview features you're interested in from the Try preview menu and you'll see them enabled by default the next time you visit the portal. No need to enable this option – preview features will be remembered automatically.
+Cost Management now remembers preview features across sessions in the preview portal. Select the preview features you're interested in from the **Try preview** menu and you'll see them enabled by default the next time you visit the portal. There's no need to enable the option; preview features are remembered automatically.
<a name="totalkpitooltip"></a> ## Total KPI tooltip
-View additional details about what costs are included and not included in the Cost analysis preview. You can enable this option from the Try Preview menu.
+View more details about what costs are included and not included in the Cost analysis preview. You can enable this option from the Try Preview menu.
The Total KPI tooltip can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) menu in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
For more information about anomaly detection and how to configure alerts, see [I
## Recent and pinned views in the cost analysis preview
-Cost analysis is your tool for interactive analytics and insights. You've seen the addition of new views and capabilities, like anomaly detection, in the cost analysis preview, but classic cost analysis is still the best tool for quick data exploration with simple filtering and grouping. While these capabilities are coming to the preview, we're introducing a new experience that allows you to select which view you want to start with, whether that be a preview view, a built-in view, or a custom view you created.
+Cost analysis is your tool for interactive analytics and insights. You've seen the addition of new views and capabilities, like anomaly detection, in the cost analysis preview. However, classic cost analysis is still the best tool for quick data exploration with simple filtering and grouping. While these capabilities are coming to the preview, we're introducing a new experience that allows you to select which view you want to start with, whether that's a preview view, a built-in view, or a custom view you created.
-The first time you open the cost analysis preview, you'll see a list of all views. When you return, you'll see a list of the recently used views to help you get back to where you left off quicker than ever. You can pin any view or even rename or subscribe to alerts for your saved views.
+The first time you open the cost analysis preview, you see a list of all views. When you return, you see a list of the recently used views to help you get back to where you left off quicker than ever. You can pin any view or even rename or subscribe to alerts for your saved views.
**Recent and pinned views are available by default in the cost analysis preview.** Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback. - <a name="aksnestedtable"></a> ## Grouping SQL databases and elastic pools
Understanding what you're being charged for can be complicated. The best place t
Many Azure services use nested or child resources. SQL servers have databases, storage accounts have containers, and virtual networks have subnets. Most of the child resources are only used to configure services, but sometimes the resources have their own usage and charges. SQL databases are perhaps the most common example.
-SQL databases are deployed as part of a SQL server instance, but usage is tracked at the database level. Additionally, you might also have charges on the parent server, like for Microsoft Defender for Cloud. To get the total cost for your SQL deployment in classic cost analysis, you need to manually sum up the cost of the server and each individual database. As an example, you can see the **aepool** elastic pool at the top of the list below and the **treyanalyticsengine** server lower down on the first page. What you don't see is another database even lower in the list. You can imagine how troubling this situation would be when you need the total cost of a large server instance with many databases.
+SQL databases are deployed as part of a SQL server instance, but usage is tracked at the database level. Additionally, you might also have charges on the parent server, like for Microsoft Defender for Cloud. To get the total cost for your SQL deployment in classic cost analysis, you need to manually sum up the cost of the server and each individual database. As an example, you can see the **aepool** elastic pool at the top of the following list and the **treyanalyticsengine** server lower down on the first page. What you don't see is another database even lower in the list. You can imagine how troubling this situation would be when you need the total cost of a large server instance with many databases.
Here's an example showing classic cost analysis where multiple related resource costs aren't grouped.
Here's an example showing grouped resource costs with the **Grouping SQL databas
You might also notice the change in row count. Classic cost analysis shows 53 rows where every resource is broken out on its own. The cost analysis preview only shows 25 rows. The difference is that the individual resources are being grouped together, making it easier to get an at-a-glance cost summary.
-In addition to SQL servers, you'll also see other services with child resources, like App Service, Synapse, and VNet gateways. Each is similarly shown grouped together in the cost analysis preview.
+In addition to SQL servers, you also see other services with child resources, like App Service, Synapse, and VNet gateways. Each is similarly shown grouped together in the cost analysis preview.
**Grouping SQL databases and elastic pools is available by default in the cost analysis preview.**
In addition to SQL servers, you'll also see other services with child resources,
## Group related resources in the cost analysis preview
-Group related resources, like disks under VMs or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID. Wait 24 hours for tags to be available in usage and your resources will be grouped. Leave feedback to let us know how we can improve this experience further for you.
+Group related resources, like disks under VMs or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID. Tags take up to 24 hours to be available in usage data; after that, your resources are grouped. Leave feedback to let us know how we can improve this experience further for you.
-Some resources have related dependencies that aren't explicit children or nested under the logical parent in Azure Resource Manager. Examples include disks used by a virtual machine or web apps assigned to an App Service plan. Unfortunately, Cost Management isn't aware of these relationships and can't group them automatically. This experimental feature uses tags to summarize the total cost of your related resources together. You'll see a single row with the parent resource. When you expand the parent resource, you'll see each linked resource listed individually with their respective cost.
+Some resources have related dependencies that aren't explicit children or nested under the logical parent in Azure Resource Manager. Examples include disks used by a virtual machine or web apps assigned to an App Service plan. Unfortunately, Cost Management isn't aware of these relationships and can't group them automatically. This experimental feature uses tags to summarize the total cost of your related resources together. You see a single row with the parent resource. When you expand the parent resource, you see each linked resource listed individually with their respective cost.
As an example, let's say you have an Azure Virtual Desktop host pool configured with two VMs. Tagging the VMs and corresponding network/disk resources groups them under the host pool, giving you the total cost of the session host VMs in your host pool deployment. This example gets even more interesting if you want to also include the cost of any cloud solutions made available via your host pool.
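As a sketch, assuming the Azure CLI `az tag update` command and placeholder resource names, tagging a disk so that its cost rolls up under its parent VM might look like this:

```azurecli
# Get the parent VM's resource ID (placeholder names).
vmId=$(az vm show --resource-group myResourceGroup --name myVM --query id -o tsv)

# Merge the cm-resource-parent tag onto the child disk without touching other tags.
az tag update \
  --resource-id /subscriptions/{sub-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myDisk \
  --operation Merge \
  --tags cm-resource-parent=$vmId
```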
The view cost link is enabled by default in the [Azure preview portal](https://p
Cost Management includes a central management screen for all configuration settings. Some of the settings are also available directly from the Cost Management menu currently. Enabling the **Streamlined menu** option removes configuration settings from the menu.
-In the following image, the menu on the left is classic cost analysis. The menu on the right is the streamlined menu.
+In the following image, the left menu is classic cost analysis. The right menu is the streamlined menu.
:::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-analysis-streamlined-menu.png" alt-text="Screenshot showing the Streamlined menu in cost analysis preview." lightbox="./media/enable-preview-features-cost-management-labs/cost-analysis-streamlined-menu.png" :::
It allows changing the scope from the menu for quicker navigation. To enable the
[Share your feedback](https://feedback.azure.com/d365community/idea/e702a826-1025-ec11-b6e6-000d3a4f07b8) about the feature. As an experimental feature, we need your feedback to determine whether to release or remove the preview.
+## Reservation utilization alerts
+
+[Azure reservations](../reservations/save-compute-costs-reservations.md) can provide cost savings by committing to one-year or three-year plans. However, reservations can sometimes go unutilized or underutilized, resulting in financial losses. As a [billing account](../reservations/reservation-utilization.md#view-utilization-as-billing-administrator) or [reservation user](../reservations/reservation-utilization.md#view-utilization-in-the-azure-portal-with-azure-rbac-access), you can [review the utilization percentage](../reservations/reservation-utilization.md) of your reservation purchases in the Azure portal, but you might miss out on important changes. Reservation utilization alerts solve this problem by sending you email notifications whenever any of your reservations exhibits low utilization, so you can take prompt action and optimize your reservation purchases for maximum efficiency.
+
+The alert email provides essential information including top unutilized reservations and a hyperlink to the list of reservations. By promptly optimizing your reservation purchases, you can avoid financial losses and ensure that your investments are delivering the expected cost savings. For more information, see [Reservation utilization alerts](reservation-utilization-alerts.md).
++ ## How to share feedback
We're always listening and making constant improvements based on your feedback,
- If you have a problem or are seeing data that doesn't make sense, submit a support request. It's the fastest way to investigate and resolve data issues and major bugs. - For feature requests, you can share ideas and vote up others in the [Cost Management feedback forum](https://aka.ms/costmgmt/feedback).-- Take advantage of the **How would you rate…** prompts in the Azure portal to let us know how each experience is working for you. We monitor the feedback proactively to identify and prioritize changes. You'll see either a blue option in the bottom-right corner of the page or a banner at the top.
+- Take advantage of the **How would you rate…** prompts in the Azure portal to let us know how each experience is working for you. We monitor the feedback proactively to identify and prioritize changes. You see either a blue option in the bottom-right corner of the page or a banner at the top.
## Next steps
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
Title: Cancel your Azure subscription description: Describes how to cancel your Azure subscription, like the Free Trial subscription -+ tags: billing Previously updated : 04/11/2023 Last updated : 05/25/2023
Although not required, Microsoft *recommends* that you take the following action
If you cancel an Azure Support plan, you're billed for the rest of the month. Cancelling a support plan doesn't result in a prorated refund. For more information, see [Azure support plans](https://azure.microsoft.com/support/plans/).
+Instead of canceling a subscription, you can remove all of its resources to [prevent unwanted charges](#prevent-unwanted-charges).
+ ## Who can cancel a subscription? The following table describes the permission required to cancel a subscription.
Depending on your subscription type, you may not be able to delete a subscriptio
> - The subscription is automatically deleted 90 days after you cancel a subscription. > - If you have deleted all resources but the Delete your subscription page shows that you still have active resources, you might have active *hidden resources*. You can't delete a subscription if you have active hidden resources. To delete them, navigate to **Subscriptions** > select the subscription > **Resources**. At the top of the page, select **Manage view** and then select **Show hidden types**. Then, delete the resources.
+## Prevent unwanted charges
+
+To prevent unwanted charges on a subscription, go to the **Resources** menu for the subscription and select the resources that you want to delete. If you don't want any charges for the subscription, select all of the subscription's resources and then **Delete** them. The subscription essentially becomes an empty container with no charges.
++
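If you prefer scripting the cleanup, a sketch like the following (Azure CLI, destructive, and assuming you truly want to remove everything in the subscription) deletes all resource groups and their resources:

```azurecli
# WARNING: irreversible. Deletes every resource group, and all resources in it,
# in the current subscription.
for rg in $(az group list --query "[].name" -o tsv); do
  az group delete --name "$rg" --yes --no-wait
done
```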
+If you have a support plan, you might continue to get charged for it. To delete a support plan, navigate to **Cost Management + Billing** and select **Recurring charges**. Select the support plan and turn off autorenewal.
## Reactivate a subscription
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
These roles are specific to managing Azure Enterprise Agreements and are in addi
> > This change doesnΓÇÖt affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
-## Azure Enterprise portal hierarchy
+## Azure portal for Cost Management and Billing
-The Azure Enterprise portal hierarchy consists of:
+The Azure portal hierarchy for Cost Management consists of:
-- **Azure Enterprise portal** - an online management portal that helps you manage costs for your Azure EA services. You can:
+- **Azure portal for Cost Management** - an online management portal that helps you manage costs for your Azure EA services. You can:
- Create an Azure EA hierarchy with departments, accounts, and subscriptions. - Reconcile the costs of your consumed services, download usage reports, and view price lists.
The Azure Enterprise portal hierarchy consists of:
- **Departments** help you segment costs into logical groupings. Departments enable you to set a budget or quota at the department level. -- **Accounts** are organizational units in the Azure Enterprise portal. You can use accounts to manage subscriptions and access reports.
+- **Accounts** are organizational units in the Azure portal for Cost Management. You can use accounts to manage subscriptions and access reports.
-- **Subscriptions** are the smallest unit in the Azure Enterprise portal. They're containers for Azure services managed by the service administrator.
+- **Subscriptions** are the smallest unit in the Azure portal for Cost Management. They're containers for Azure services managed by the Account Owner role, also known as the Subscription's service administrator.
The following diagram illustrates simple Azure EA hierarchies.
The following administrative user roles are part of your enterprise enrollment:
- Service administrator - Notification contact
-Roles work in two different portals to complete tasks. You use the [Azure Enterprise portal](https://ea.azure.com) to manage billing and costs, and the [Azure portal](https://portal.azure.com) to manage Azure services.
+Use the Cost Management blade in the [Azure portal](https://portal.azure.com) to manage Azure Enterprise Agreement roles.
Direct EA customers can complete all administrative tasks in the Azure portal. You can use the [Azure Portal](https://portal.azure.com) to manage billing, costs, and Azure services.
-User roles are associated with a user account. To validate user authenticity, each user must have a valid work, school, or Microsoft account. Ensure that each account is associated with an email address that's actively monitored. Account notifications are sent to the email address.
+User roles are associated with a user account. To validate user authenticity, each user must have a valid work, school, or Microsoft account. Ensure that each account is associated with an email address that's actively monitored. Enrollment notifications are sent to the email address.
-When setting up users, you can assign multiple accounts to the enterprise administrator role. However, only one account can hold the account owner role. Also, you can assign both the enterprise administrator and account owner roles to a single account.
+> [!NOTE]
+> The Account Owner role is often assigned to a service account that doesn't have an actively monitored email.
+
+When setting up users, you can assign multiple accounts to the enterprise administrator role. An enrollment can have multiple account owners, for example, one per department. Also, you can assign both the enterprise administrator and account owner roles to a single account.
### Enterprise administrator
-Users with this role have the highest level of access. They can:
+Users with this role have the highest level of access to the Enrollment. They can:
- Manage accounts and account owners. - Manage other enterprise administrators.
Users with this role have the highest level of access. They can:
You can have multiple enterprise administrators in an enterprise enrollment. You can grant read-only access to enterprise administrators.
-The EA administrator role automatically inherits all access and privilege of the department administrator role. So there’s no need to manually give an EA administrator the department administrator role. Avoid giving the EA administrator the department administrator role because, as a department administrator, the EA administrator:
-- Won't have access to the Enrollment tab in the EA portal
-- Won't have access to the Usage Summary Page under the Reports tab
-
+The EA administrator role automatically inherits all access and privilege of the department administrator role. So there’s no need to manually give an EA administrator the department administrator role.
The enterprise administrator role can be assigned to multiple accounts.
Enterprise administrators have the most privileges when managing an Azure EA enr
## Update account owner state from pending to active
-When new Account Owners (AO) are added to an Azure EA enrollment for the first time, their status appears as _pending_. When a new account owner receives the activation welcome email, they can sign in to activate their account. Once they activate their account, the account status is updated from _pending_ to _active_. The account owner needs to read the 'Warning' message and select **Continue**. New users might get prompted enter their first and last name to create a Commerce Account. If so, they must add the required information to continue and then the account is activated.
+When new Account Owners (AO) are added to an Azure EA enrollment for the first time, their status appears as _pending_. When a new account owner receives the activation welcome email, they can sign in to activate their account.
+
+> [!NOTE]
+> If the Account Owner is a service account and doesn't have a monitored email address, use an InPrivate browser session to sign in to the Azure portal and navigate to Cost Management, where you're prompted to accept the activation welcome email.
+
+Once they activate their account, the account status is updated from _pending_ to _active_. The account owner needs to read the 'Warning' message and select **Continue**. New users might get prompted to enter their first and last name to create a Commerce Account. If so, they must add the required information to continue and then the account is activated.
+
+> [!NOTE]
+> A subscription is associated with one and only one account. The warning message informs the Account Owner that accepting the offer moves the subscriptions associated with the account to the new enrollment.
## Add a department Admin
Direct EA admins can add department admins in the Azure portal. For more informa
## See pricing for different user roles
-You may see different pricing in the Azure portal depending on your administrative role and how the view charges policies are set by the Enterprise Administrator. The two policies in the Enterprise portal that affect the pricing you see in the Azure portal are:
-- DA view charges
-- AO view charges
+You may see different pricing in the Azure portal depending on your administrative role and how the view charges policies are set by the Enterprise Administrator. Whether Department Administrator and Account Owner roles can see charges is controlled by restricting access to billing information.
To learn how to set these policies, see [Manage access to billing information for Azure](manage-billing-access.md).
databox-online Azure Stack Edge Gpu Clustering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-clustering-overview.md
Previously updated : 04/18/2023 Last updated : 05/16/2023
Before you configure clustering on your device, you must cable the devices as pe
![Figure showing the steps in the deployment of a two-node Azure Stack Edge](media/azure-stack-edge-gpu-clustering-overview/azure-stack-edge-clustering-deployment-1.png) 1. Order two independent Azure Stack Edge devices. For more information, see [Order an Azure Stack Edge device](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).
-1. Cable each node independently as you would for a single node device. Based on the workloads that you intend to deploy, cross connect the network interfaces on these devices via cables, and with or without switches. For detailed instructions, see [Cable your two-node cluster device](azure-stack-edge-gpu-deploy-install.md#cable-the-device).
+1. Cable each node independently as you would for a single node device. Based on the workloads that you intend to deploy, cross connect the network interfaces on these devices via cables, and with or without switches. For detailed instructions, see [Cable your two-node cluster device](azure-stack-edge-gpu-deploy-install.md?pivots=twonode#cable-the-device).
1. Start cluster creation on the first node. Choose the network topology that conforms to the cabling across the two nodes. The chosen topology would dictate the storage and clustering traffic between the nodes. See detailed steps in [Configure network and web proxy on your device](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md). 1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Ensure that port settings match between same port name on each appliance. Get the authentication token on this node. 1. Use the authentication token from the prepared node and join this node to the first node to form a cluster.
databox-online Azure Stack Edge Gpu Configure Tls Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-configure-tls-settings.md
Previously updated : 02/22/2021 Last updated : 05/24/2023
The guidelines provided here are based on testing performed on a client running
## Configure TLS 1.2 for current PowerShell session
-Do the following steps to configure TLS 1.2 on your client.
+Use the following steps to configure TLS 1.2 on your client.
1. Run PowerShell as administrator. 2. To set TLS 1.2 for the current PowerShell session, type: ```azurepowershell
- $TLS12Protocol = [System.Net.SecurityProtocolType] 'Ssl3 , Tls12'
- [System.Net.ServicePointManager]::SecurityProtocol = $TLS12Protocol
+ [System.Net.ServicePointManager]::SecurityProtocol = 'TLS12'
```+ ## Configure TLS 1.2 on client If you want to set system-wide TLS 1.2 for your environment, follow the guidelines in these documents:
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
- **Scanning OS packages** - container vulnerability assessment has the ability to scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-agentless-containers-posture.md#registries-and-images). - **Language specific packages** ΓÇô support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [complete list of supported languages](support-agentless-containers-posture.md#registries-and-images). - **Image scanning in Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [connect privately to an Azure container registry using Azure Private Link](/azure/container-registry/container-registry-private-link#set-up-private-endpointportal-recommended). -- **Gaining intel for existing exploits of a vulnerability** - While vulnerability reporting tools can report the ever growing volume of vulnerabilities, the capacity to efficiently remediate them remains a challenge;teams. These tools typically prioritize their remediation processes according to the severity of the vulnerability. MDVM provides additional context on the risk related with each vulnerability, leveraging intelligent assessment and risk-based prioritization against industry security benchmarks, based on three data sources: [exploit DB](https://www.exploit-db.com/), [CISA KEV](https://www.cisa.gov/known-exploited-vulnerabilities-catalog), and [MSRC](https://www.microsoft.com/msrc?SilentAuth=1&wa=wsignin1.0)
+- **Gaining intel for existing exploits of a vulnerability** - While vulnerability reporting tools can report the ever growing volume of vulnerabilities, the capacity to efficiently remediate them remains a challenge. These tools typically prioritize their remediation processes according to the severity of the vulnerability. MDVM provides additional context on the risk related with each vulnerability, leveraging intelligent assessment and risk-based prioritization against industry security benchmarks, based on three data sources: [exploit DB](https://www.exploit-db.com/), [CISA KEV](https://www.cisa.gov/known-exploited-vulnerabilities-catalog), and [MSRC](https://www.microsoft.com/msrc?SilentAuth=1&wa=wsignin1.0)
- **Reporting** - Defender for Containers powered by Microsoft Defender Vulnerability Management (MDVM) reports the vulnerabilities as the following recommendation: | Recommendation | Description |
Container registry vulnerability assessment scans container images stored in you
1. Defender CSPM automatically discovers all containers registries, repositories and images (created before or after enabling the plan). 1. Once a day, all discovered images are pulled and an inventory is created for each image that is discovered. 1. Vulnerability reports for known vulnerabilities (CVEs) are generated for each software that is present on an image inventory.
-6. Vulnerability reports are refreshed daily for any image pushed during the last 90 days to a registry or currently running on a Kubernetes cluster monitored by Defender CSPM Agentless discovery and visibility for Kubernetes, or monitored by the Defender for Containers agent (profile or extension).
+1. Vulnerability reports are refreshed daily for any image pushed during the last 90 days to a registry or currently running on a Kubernetes cluster monitored by Defender CSPM Agentless discovery and visibility for Kubernetes, or monitored by the Defender for Containers agent (profile or extension).
## Next steps
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Learn how to [build queries with cloud security explorer](how-to-manage-cloud-se
We're announcing the release of Vulnerability Assessment for Linux images in Azure container registries powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM. This release includes daily scanning of images. Findings used in the Security Explorer and attack paths rely on MDVM Vulnerability Assessment instead of the Qualys scanner.
-The existing recommendation "Container registry images should have vulnerability findings resolved" is replaced by a new recommendation powered by MDVM:
+The existing recommendation `Container registry images should have vulnerability findings resolved` is replaced by a new recommendation powered by MDVM:
|Recommendation | Description | Assessment Key| |--|--|--|
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsof
|Hardware profile |Appliance |SPAN/TAP throughput |Physical specifications | |||||
-|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: Up to 3 Gbps <br>**Max devices**: 12K <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
-|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) <br><br> [Dell PowerEdge R350](appliance-catalog/dell-poweredge-r350-e1800.md) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 4 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
-|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: Up to 200 Mbps<br>**Max devices**: 1,000 <br> 8 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
-|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: Up to 10 Mbps <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
+|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: Up to 3 Gbps <br>**Max devices**: 12K <br> 16C[32T] CPU/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
+|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) <br><br> [Dell PowerEdge R350](appliance-catalog/dell-poweredge-r350-e1800.md) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 4C[8T] CPU/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 8C[8T] CPU/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
+|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: Up to 200 Mbps<br>**Max devices**: 1,000 <br> 4C[8T] CPU/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
+|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: Up to 10 Mbps <br>**Max devices**: 100 <br> 4C[4T] CPU/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
> [!NOTE] > The performance, capacity, and activity of an OT/IoT network may vary depending on its size, capacity, protocols distribution, and overall activity. For deployments, it is important to factor in raw network speed, the size of the network to monitor, and application configuration. The selection of processors, memory, and network cards is heavily influenced by these deployment configurations. The amount of space needed on your disk will differ depending on how long you store data, and the amount and type of data you store. <br><br>
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Title: Onboard and activate a virtual OT sensor - Microsoft Defender for IoT. description: This tutorial describes how to set up a virtual OT network sensor to monitor your OT network traffic. Previously updated : 05/03/2023 Last updated : 05/23/2023 # Tutorial: Onboard and activate a virtual OT sensor
This procedure describes how to install the sensor software on your VM.
For more information, see [Configure proxy settings on an OT sensor](connect-sensors.md).
-1. <a name=credentials></a>The installation process starts running and then shows the credentials screen.
+1. <a name="credentials"></a>The installation process starts running and then shows the credentials screen.
Save the usernames and passwords listed, as the passwords are unique and this is the only time that the credentials are shown. Copy the credentials to a safe place so that you can use them when signing into the sensor for the first time.
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
If you don't have an existing key vault, use the following steps to create one:
Leave the other options at their defaults.
-1. On the Access policy tab, select **Azure role-based access control**, and then select **Review + create**.
+1. On the Access configuration tab, select **Azure role-based access control**, and then select **Review + create**.
1. On the Review + create tab, select **Create**.
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
Last updated 05/19/2023
# Configure and use a remote desktop gateway in Azure DevTest Labs
-This article describes how to set up and use a gateway for secure remote desktop access to lab virtual machines (VMs) in Azure DevTest Labs. Using a gateway improves security because you don't expose the VMs' remote desktop protocol (RDP) ports to the internet. This remote desktop gateway solution also supports token authentication.
+This article describes how to set up and use a gateway for secure [remote desktop](/windows-server/remote/remote-desktop-services/Welcome-to-rds) access to lab virtual machines (VMs) in Azure DevTest Labs. Using a gateway improves security because you don't expose the VMs' remote desktop protocol (RDP) ports to the internet. This remote desktop gateway solution also supports token authentication.
DevTest Labs provides a central place for lab users to view and connect to their VMs. Selecting **Connect** > **RDP** on a lab VM's **Overview** page creates a machine-specific RDP file, and users can open the file to connect to the VM.
Follow these steps to set up a sample remote desktop gateway farm.
|`adminUsername` |**Required** |Administrator user name for the gateway machines. | |`adminPassword` |**Required** |Password for the administrator account for the gateway machines. | |`instanceCount` | |Number of gateway machines to create. |
- |`alwaysOn` | |Whether to keep the created Azure Functions app in a warm state or not. Keeping the Azure Functions app on avoids delays when users first try to connect to their lab VMs, but has cost implications. |
- |`tokenLifetime` | |The length of time in HH:MM:SS format that the created token will be valid. |
+ |`alwaysOn` | |Whether to keep the created Azure Functions app warmed (on) or not. Keeping the app on avoids delays when users first try to connect to their lab VMs, but has cost implications. |
+ |`tokenLifetime` | |The length of time in HH:MM:SS format that the created token is valid. |
|`sslCertificate` |**Required** |The Base64 encoding of the TLS/SSL certificate for the gateway machine. | |`sslCertificatePassword` |**Required** |The password of the TLS/SSL certificate for the gateway machine. | |`sslCertificateThumbprint` |**Required** |The certificate thumbprint for identification in the local certificate store of the signing certificate. |
Follow these steps to set up a sample remote desktop gateway farm.
|`_artifactsLocation` |**Required** |The URI location to find artifacts this template requires. This value must be a fully qualified URI, not a relative path. The artifacts include other templates, PowerShell scripts, and the Remote Desktop Gateway Pluggable Authentication module, expected to be named *RDGatewayFedAuth.msi*, that supports token authentication. | |`_artifactsLocationSasToken`|**Required** |The shared access signature (SAS) token to access artifacts, if the `_artifactsLocation` is an Azure storage account. |
-1. Deploy *azuredeploy.json* by using the following Azure CLI command:
+1. Run the following Azure CLI command to deploy *azuredeploy.json*:
```azurecli az deployment group create --resource-group {resource-group} --template-file azuredeploy.json --parameters @azuredeploy.parameters.json --parameters _artifactsLocation="{storage-account-endpoint}/{container-name}" --parameters _artifactsLocationSasToken="?{sas-token}"
Follow these steps to set up a sample remote desktop gateway farm.
1. Configure DNS so that the FQDN of the TLS/SSL certificate directs to the `gatewayIP` IP address.
-After you create the remote desktop gateway farm and update DNS, you can configure Azure DevTest Labs to use the gateway.
+After you create the remote desktop gateway farm and update DNS, configure Azure DevTest Labs to use the gateway.
## Configure the lab to use token authentication
-Before you update the lab settings, store the key for the authentication token function in the lab's key vault. You can get the function key value on the function's **Function Keys** page in the Azure portal.
-
-To find the ID of the lab's key vault, run the following Azure CLI command:
+Before you update lab settings, store the key for the authentication token function in the lab's key vault. You can get the function key value on the function's **Function Keys** page in the Azure portal. To find the ID of the lab's key vault, run the following Azure CLI command:
```azurecli az resource show --name {lab-name} --resource-type 'Microsoft.DevTestLab/labs' --resource-group {lab-resource-group-name} --query properties.vaultName ```
-For more information on how to save a secret in a key vault, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault). Record the secret name to use later. This value isn't the function key itself, but the name of the key vault secret that holds the function key.
+To learn how to save a secret in a key vault, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault). Record the secret name to use later. This value isn't the function key itself, but the name of the key vault secret that holds the function key.
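If you prefer the CLI over the portal for this step, a sketch like the following stores the function key as a secret; the vault and secret names are placeholders:

```azurecli
# Store the gateway function key as a secret in the lab's key vault.
az keyvault secret set \
  --vault-name {lab-key-vault-name} \
  --name {gateway-token-secret-name} \
  --value {function-key-value}
```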
To configure a lab's **Gateway hostname** and **Gateway token secret** to use token authentication with the gateway machine(s), follow these steps:
To configure a lab's **Gateway hostname** and **Gateway token secret** to use to
Once you configure both the gateway and the lab, the RDP connection file created when the lab user selects **Connect** includes the necessary information to connect to the gateway and use token authentication.
-### Configure a lab via automation
+### Automate lab configuration
-- [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) is a sample PowerShell script to automatically set **Gateway hostname** and **Gateway token secret** settings.
+- PowerShell: [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) is a sample PowerShell script to automatically set the **Gateway hostname** and **Gateway token secret** settings.
-- The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/lab) has [Gateway sample ARM templates](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/lab) that create or update a lab with **Gateway hostname** and **Gateway token secret** settings.
+- ARM: Use the [Gateway sample ARM templates](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/lab) in the Azure DevTest Labs GitHub repository to create or update labs with **Gateway hostname** and **Gateway token secret** settings.
### Configure a network security group
-To further secure the lab, you can add a network security group (NSG) to the virtual network the lab VMs use. For instructions, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).
-
-For example, an NSG could allow only traffic that first goes through the gateway to reach lab VMs. The rule source is the IP address of the gateway machine or load balancer for the gateway farm.
+To further secure the lab, add a network security group (NSG) to the virtual network the lab VMs use as described in [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md). For example, an NSG could allow only traffic that first goes through the gateway to reach lab VMs. The rule source is the IP address of the gateway machine or load balancer for the gateway farm.
![Screenshot of a Network security group rule.](./media/configure-lab-remote-desktop-gateway/network-security-group-rules.png) ## Next steps -- [Remote Desktop Services documentation](/windows-server/remote/remote-desktop-services/Welcome-to-rds) - [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure)-- [System Center documentation](/system-center/)
event-grid Mqtt Event Grid Namespace Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-event-grid-namespace-terminology.md
Key terms relevant for Event Grid namespace and MQTT resources are explained.
## Namespace
-An Event Grid namespace is a declarative space that provides a scope to all the nested resources or subresources such as topics, certificates, clients, client groups, topic spaces, permission bindings. It gives you a unique FQDN.
+An Event Grid namespace is a declarative space that provides a scope to all the nested resources or subresources such as topics, certificates, clients, client groups, topic spaces, and permission bindings.
-Namespace is a tracked resource with 'tags' and a 'location' properties, and once created can be found on resources.azure.com.
+| Resource | Protocol supported |
+| : | :: |
+| Namespace topics | HTTP |
+| Topic Spaces | MQTT |
+| Clients | MQTT |
+| Client Groups | MQTT |
+| CA Certificates | MQTT |
+| Permission bindings | MQTT |
Using the namespace, you can organize the subresources into logical groups and manage them as a single unit in your Azure subscription. Deleting a namespace deletes all the subresources encompassed within the namespace.
+A namespace gives you a unique fully qualified domain name (FQDN) and exposes two endpoints:
+
+- An HTTP endpoint to support general messaging requirements using Namespace Topics.
+- An MQTT endpoint for IoT messaging or solutions that use MQTT.
+
+A Namespace also provides DNS-integrated network endpoints and a range of access control and network integration management features such as IP ingress filtering and private links. It's also the container of managed identities used for all contained resources that use them.
+
+Namespace is a tracked resource with 'tags' and 'location' properties, and once created, it can be found on resources.azure.com.
+ The name of the namespace can be 3-50 characters long. It can include alphanumeric characters and hyphens (-), but no spaces. The name needs to be unique per region. ## Client
-Client is a device or an application that can publish and/or subscribe MQTT messages.
+Client is a device or an application that can publish and/or subscribe MQTT messages. For more information about client configuration, see [MQTT clients](mqtt-clients.md).
## Certificate / Cert
-Certificate is a form of asymmetric credential. They're a combination of a public key from an asymmetric keypair and a set of metadata describing the valid uses of the keypair. If the keypair of the issuer is the same keypair as the certificate, the certificate is said to be "self-signed". Third-party certificate issuers are sometimes called Certificate Authorities (CA).
+Certificate is a form of asymmetric credential. They're a combination of a public key from an asymmetric keypair and a set of metadata describing the valid uses of the keypair. If the keypair of the issuer is the same keypair as the certificate, the certificate is said to be "self-signed". Third-party certificate issuers are sometimes called Certificate Authorities (CA). For more information about client authentication, see [MQTT client authentication](mqtt-client-authentication.md).
## Client attributes
-Client attributes represent a set of key-value pairs that provide descriptive information about the client. Client attributes are used in creating client groups and as variables in Topic Templates. For example, client type is an attribute that provides the client's type.
+Client attributes represent a set of key-value pairs that provide descriptive information about the client. Client attributes are used in creating client groups and as variables in Topic Templates. For example, client type is an attribute that provides the client's type. For more information about client configuration, see [MQTT clients](mqtt-clients.md).
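As an illustrative sketch (the attribute names and values here are hypothetical, not from the original article), a thermostat client's attributes might look like the following:

```json
{
  "type": "thermostat",
  "floor": 3,
  "building": "building-a"
}
```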
## Client group
-Client group is a collection of clients. Clients can be grouped together using common client attribute(s). Client groups can be given permissions to publish and/or subscribe to a specific topic space.
+Client group is a collection of clients. Clients can be grouped together using common client attributes. Client groups can be given permissions to publish and/or subscribe to a specific topic space. For more information about client group configuration, see [MQTT client groups](mqtt-client-groups.md).
## Topic space
-Topic space is a set of topic templates. It's used to simplify access control management by enabling you to grant publish or subscribe access to a group of topics at once instead of individual topics.
+Topic space is a set of topic templates. It's used to simplify access control management by enabling you to grant publish or subscribe access to a group of topics at once instead of individual topics. For more information about topic spaces configuration, see [MQTT topic spaces](mqtt-topic-spaces.md).
## Topic filter
An MQTT topic filter is an MQTT topic that can include wildcards for one or more
## Topic template
-Topic templates are an extension of the topic filter that supports variables. It's used for fine-grained access control within a client group.
+Topic templates are an extension of the topic filter that supports variables. They're used for fine-grained access control within a client group.
## Permission bindings
-A Permission Binding grants access to a specific client group to either publish or subscribe on a specific topic space.
+A Permission Binding grants access to a specific client group to either publish or subscribe on a specific topic space. For more information about permission bindings, see [MQTT access control](mqtt-access-control.md).
## Throughput units
event-grid Mqtt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-overview.md
Azure Event Grid enables your MQTT clients to communicate with each other and wi
You can find code samples that demonstrate these scenarios in [this repository.](https://github.com/Azure-Samples/MqttApplicationSamples)
+The MQTT support in Event Grid is ideal for the implementation of automotive and mobility scenarios, among others. See [the reference architecture](mqtt-automotive-connectivity-and-data-solution.md) to learn how to build secure and scalable solutions for connecting millions of vehicles to the cloud, using Azure's messaging and data analytics services.
+ :::image type="content" source="media/overview/mqtt-messaging-high-res.png" alt-text="High-level diagram of Event Grid that shows bidirectional MQTT communication with publisher and subscriber clients." border="false"::: > [!NOTE]
event-grid Mqtt Publish And Subscribe Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-cli.md
Save the Namespace object in namespace.json file in resources folder.
"properties": { "inputSchema": "CloudEventSchemaV1_0", "topicSpacesConfiguration": {
- "state": "Enabled",
- }
+ "state": "Enabled"
+ },
+ "isZoneRedundant": true
}, "location": "{Add region name}" }
Store the client object in client1.json file. Update the allowedThumbprints fie
```json {
- "properties": {
- "state": "Enabled",
- "authenticationName": ΓÇ£client1-authnID",
- "clientCertificateAuthentication": {
- "allowedThumbprints": [
- "{Your client 1 certificate thumbprint}"
- ]
- }
+ "state": "Enabled",
+ "authenticationName": "client1-authnID",
+ "clientCertificateAuthentication": {
+ "validationScheme": "ThumbprintMatch",
+ "allowedThumbprints": [
+ "{Your client 1 certificate thumbprint}"
+ ]
} } ```
Store the client object in client1.json file. Update the allowedThumbprints fie
Use the az resource command to create the first client. Update the command with your subscription ID, Resource group ID, and a Namespace name. ```azurecli-interactive
-az resource create --resource-type Microsoft.EventGrid/namespaces/clients --id /subscriptions/{Subscription ID}/resourceGroups/{Resource Group}/providers/Microsoft.EventGrid/namespaces/{Namespace Name}/clients/{Client Name} --is-full-object --api-version 2023-06-01-preview --properties @./resources/client1.json
+az resource create --resource-type Microsoft.EventGrid/namespaces/clients --id /subscriptions/{Subscription ID}/resourceGroups/{Resource Group}/providers/Microsoft.EventGrid/namespaces/{Namespace Name}/clients/{Client Name} --api-version 2023-06-01-preview --properties @./resources/client1.json
``` > [!NOTE]
Store the below object in topicspace.json file.
```json {
- "properties": {
- "topicTemplates": [
- "contosotopics/topic1"
- ]
- }
+ "topicTemplates": [
+ "contosotopics/topic1"
+ ]
} ``` Use the az resource command to create the topic space. Update the command with your subscription ID, Resource group ID, namespace name, and a topic space name. ```azurecli-interactive
-az resource create --resource-type Microsoft.EventGrid/namespaces/topicSpaces --id /subscriptions/{Subscription ID}/resourceGroups/{Resource Group}/providers/Microsoft.EventGrid/namespaces/{Namespace Name}/topicSpaces/{Topic Space Name} --is-full-object --api-version 2023-06-01-preview --properties @./resources/topicspace.json
+az resource create --resource-type Microsoft.EventGrid/namespaces/topicSpaces --id /subscriptions/{Subscription ID}/resourceGroups/{Resource Group}/providers/Microsoft.EventGrid/namespaces/{Namespace Name}/topicSpaces/{Topic Space Name} --api-version 2023-06-01-preview --properties @./resources/topicspace.json
``` ## Create PermissionBindings
Store the first permission binding object in permissionbinding1.json file. Repl
```json {
- "properties": {
- "clientGroupName": "$all",
- "permission": "PublisherΓÇ¥,
- "topicSpaceName": "{Your topicspace name}"
- }
+ "clientGroupName": "$all",
+ "permission": "Publisher",
+ "topicSpaceName": "{Your topicspace name}"
} ```
Store the second permission binding object in permissionbinding2.json file. Rep
```json {
- "properties": {
- "clientGroupName": "$all",
- "permission": "SubscriberΓÇ¥,
- "topicSpaceName": "{Your topicspace name}"
- }
+ "clientGroupName": "$all",
+ "permission": "Subscriber",
+ "topicSpaceName": "{Your topicspace name}"
} ```
You need to install the MQTTnet package (version 4.1.4.563) from NuGet to run th
**Sample C# code to connect a client, publish/subscribe MQTT message on a topic**
+> [!IMPORTANT]
+> Please update the client certificate and key PEM file paths depending on the location of your client certificate files. Also, ensure that the client authentication name and topic information match your configuration.
+ ```csharp using MQTTnet.Client; using MQTTnet;
var mqttClient = new MqttFactory().CreateMqttClient();
var connAck = await mqttClient!.ConnectAsync(new MqttClientOptionsBuilder() .WithTcpServer(hostname, 8883)
- .WithClientId(clientId).WithCredentials(“client1-authnID”, "") //use client authentication name in the username
+ .WithClientId(clientId)
+ .WithCredentials("client1-authnID", "") //use client authentication name in the username
.WithTls(new MqttClientOptionsBuilderTlsParameters() { UseTls = true,
event-grid Mqtt Publish And Subscribe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-portal.md
If you don't already have a certificate, you can create a sample certificate usi
1. For publish / subscribe MQTT messages, you can use any of your favorite tools. For demonstration purposes, publish / subscribe is shown using the MQTTX app, which can be downloaded from https://mqttx.app/. :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-add-client.png" alt-text="Screenshot showing MQTTX app left rail to add new client.":::
-2. Configure client1 with
+
+1. Configure client1 with
- Name as client-name-1 (this value can be anything)
- Client ID as client1-sessionID1 (Client ID in CONNECT packet is used to identify the session ID for the client connection)
- Username as client1-authnID (Username must match the client authentication name in client metadata)
-3. Update the host name to MQTT hostname from the Overview page of the namespace.
+
+1. Update the host name to MQTT hostname from the Overview page of the namespace.
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/event-grid-namespace-overview.png" alt-text="Screenshot showing Event Grid namespace overview page, which has MQTT hostname.":::
-4. Toggle SSL/TLS to ON.
-5. You can leave the SSL Secure ON.
-6. Select Certificate as Self signed.
-7. Provide the path to client.cer.pem file for Client Certificate File.
-8. Provide the path to client.key.pem file for Client key file.
-9. Rest of the settings can be left with predefined default values.
+1. Update the port to 8883.
+1. Toggle SSL/TLS to ON.
+1. Toggle SSL Secure to ON to ensure service certificate validation.
+1. Select Certificate as Self signed.
+1. Provide the path to client.cer.pem file for Client Certificate File.
+1. Provide the path to client.key.pem file for Client key file.
+1. The rest of the settings can be left at their predefined default values.
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client1-configuration-1.png" alt-text="Screenshot showing client 1 configuration part 1 on MQTTX app."::: :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client1-configuration-2.png" alt-text="Screenshot showing client 1 configuration part 2 on MQTTX app.":::
-10. Select Connect to connect the client to the Event Grid MQTT service.
-11. Repeat the above steps to connect the second client “client2”, with corresponding authentication information as shown.
+1. Select Connect to connect the client to the Event Grid MQTT service.
+1. Repeat the above steps to connect the second client “client2”, with corresponding authentication information as shown.
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client2-configuration-1.png" alt-text="Screenshot showing client 2 configuration part 1 on MQTTX app.":::
event-grid Mqtt Routing To Event Hubs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-to-event-hubs-cli.md
az eventgrid event-subscription create --name contosoEventSubscription \
"topicSpacesConfiguration": { "state": "Enabled", "routeTopicResourceId": "/subscriptions/{Subscription ID}/resourceGroups/{Resource Group ID}/providers/Microsoft.EventGrid/topics/{EG Custom Topic Name}"
- }
+ },
+ "isZoneRedundant": true
}, "location": "{region name}" }
event-grid Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-support.md
For more information, see [How to establish multiple sessions for a single clien
#### Handling sessions: - If a client tries to take over another client's active session by presenting its session name, its connection request will be rejected with an unauthorized error. For example, if Client B tries to connect to session 123 that is assigned at that time for client A, Client B's connection request will be rejected.-- If a client resource is deleted without ending its session, other clients won't be able to use its session name until the session expires. For example, If client B creates a session with session name 123 then client B deleted, client A won't be able to connect to session 123 until it expires.
+- If a client resource is deleted without ending its session, other clients won't be able to use that session name until the session expires. For example, if client B creates a session with session name 123 and client B is then deleted, client A won't be able to connect to session 123 until the session expires.
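The session name is presented as the MQTT client ID in the CONNECT packet. The following minimal sketch (not part of the source article; host name and identifiers are placeholders) uses the MQTTnet package shown in the CLI quickstart above to claim a specific session name:

```csharp
using MQTTnet;
using MQTTnet.Client;

// Sketch only: certificate authentication settings are omitted for brevity.
// Connecting with client ID "client1-session123" claims session "session123"
// for this client; another client presenting the same session name is
// rejected while the session is active.
var mqttClient = new MqttFactory().CreateMqttClient();

var options = new MqttClientOptionsBuilder()
    .WithTcpServer("YOUR_MQTT_HOSTNAME", 8883)      // placeholder host
    .WithClientId("client1-session123")             // session name travels here
    .WithCredentials("client1-authnID", "")         // authentication name as username
    .WithTls(new MqttClientOptionsBuilderTlsParameters() { UseTls = true })
    .Build();

await mqttClient.ConnectAsync(options);
```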
## MQTT features
Event Grid supports the following MQTT features:

### Quality of Service (QoS)
-Event Grid supports QoS 0 and 1, which define the guarantee of message delivery on PUBLISH and SUBSCRIBE packets between clients and Event Grid. QoS 0 guarantees at-most-once delivery; messages with QoS 0 aren't acknowledged by the subscriber nor get retransmitted by the publisher. QoS 1 guarantees at-least-once delivery; messages are acknowledged by the subscriber and get retransmitted by the publisher if they didn't get acknowledged. QoS enables your clients to control the efficiency and reliability of the communication.
+Event Grid supports QoS 0 and 1, which define the guarantee of message delivery on PUBLISH and SUBSCRIBE packets between clients and Event Grid. QoS 0 guarantees at-most-once delivery; messages with QoS 0 aren't acknowledged by the subscriber nor get retransmitted by the publisher. QoS 1 guarantees at-least-once delivery; messages are acknowledged by the subscriber and get retransmitted by the publisher if they didn't get acknowledged. QoS enables your clients to control the efficiency and reliability of the communication.
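As a minimal sketch of choosing QoS per message with the MQTTnet client used elsewhere in this article (topic and payload are placeholders; `mqttClient` is assumed to be an already connected client):

```csharp
using MQTTnet;
using MQTTnet.Protocol;

// QoS 1 (AtLeastOnce): the receiver acknowledges, and the sender retransmits
// until acknowledged. Use AtMostOnce for QoS 0 fire-and-forget delivery.
var message = new MqttApplicationMessageBuilder()
    .WithTopic("contosotopics/topic1")
    .WithPayload("{\"temperature\": 21.5}")
    .WithQualityOfServiceLevel(MqttQualityOfServiceLevel.AtLeastOnce)
    .Build();

await mqttClient.PublishAsync(message);
```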
+ ### Persistent sessions
-Event Grid supports persistent sessions for MQTT v3.1.1 such that Event Grid preserves information about a client's session in case of disconnections to ensure reliability of the communication. This information includes the client's subscriptions and missed/unacknowledged QoS 1 messages. Clients can configure a persistent session through setting the cleanSession flag in the CONNECT packet to false.
+Event Grid supports persistent sessions for MQTT v3.1.1 such that Event Grid preserves information about a client's session in case of disconnections to ensure reliability of the communication. This information includes the client's subscriptions and missed/unacknowledged QoS 1 messages. Clients can configure a persistent session through setting the cleanSession flag in the CONNECT packet to false.
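A minimal MQTTnet sketch of requesting a persistent session (host name and IDs are placeholders):

```csharp
using MQTTnet.Client;

// MQTT v3.1.1: cleanSession = false asks the broker to retain this session's
// subscriptions and undelivered QoS 1 messages across disconnections.
var options = new MqttClientOptionsBuilder()
    .WithTcpServer("YOUR_MQTT_HOSTNAME", 8883)
    .WithClientId("client1-sessionID1")
    .WithCleanSession(false)
    .Build();
```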
+ ### Clean start and session expiry
MQTT v5 introduced the clean start and session expiry features as an improvement over MQTT v3.1.1 in handling session persistence. Clean Start is a feature that allows a client to start a new session with Event Grid, discarding any previous session data. Session Expiry allows a client to inform Event Grid when an inactive session is considered expired and automatically removed. In the CONNECT packet, a client can set the Clean Start flag to true and/or a short session expiry interval for security reasons or to avoid any potential data conflicts that may have occurred during the previous session. A client can also set Clean Start to false and/or a long session expiry interval to ensure the reliability and efficiency of persistent sessions.
+
+**Maximum session expiry interval:**
+On the Configuration page of an Event Grid namespace, you can configure the maximum session expiry interval at namespace scope. This setting applies to all the clients within the namespace.
+
+If you're using MQTT v3.1.1, this setting provides the session expiration time and ensures that sessions for inactive clients are terminated once the time limit is reached.
+
+If you're using MQTT v5, this setting provides the maximum limit for the Session Expiry Interval value. Any Session Expiry Interval requested above this limit is negotiated down to the maximum.
+
+The default value for this namespace property is 1 hour and can be extended up to 8 hours.
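A hedged MQTT v5 sketch with MQTTnet, assuming the `WithSessionExpiryInterval` builder option and placeholder host name and IDs:

```csharp
using MQTTnet.Client;
using MQTTnet.Formatter;

// Clean Start = false keeps prior session state; the client requests a
// one-hour session expiry. If the request exceeds the namespace's maximum
// session expiry interval, the value is negotiated down.
var options = new MqttClientOptionsBuilder()
    .WithProtocolVersion(MqttProtocolVersion.V500)
    .WithTcpServer("YOUR_MQTT_HOSTNAME", 8883)
    .WithClientId("client1-sessionID1")
    .WithCleanSession(false)
    .WithSessionExpiryInterval(3600)   // seconds
    .Build();
```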
+
### User properties
-Event Grid supports user properties on MQTT v5 PUBLISH packets that allow you to add custom key-value pairs in the message header to provide more context about the message. The use cases for user properties are versatile based on your needs. You can use this feature to include the purpose or origin of the message so the receiver can handle the message without parsing the payload, saving computing resources. For example, a message with a user property indicating its purpose as a "warning" could trigger different handling logic than one with the purpose of "information."
+Event Grid supports user properties on MQTT v5 PUBLISH packets that allow you to add custom key-value pairs in the message header to provide more context about the message. The use cases for user properties are versatile. You can use this feature to include the purpose or origin of the message so the receiver can handle the message without parsing the payload, saving computing resources. For example, a message with a user property indicating its purpose as a "warning" could trigger different handling logic than one with the purpose of "information."
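For example, a minimal MQTTnet sketch attaching a hypothetical `purpose` user property (topic and values are placeholders):

```csharp
using MQTTnet;

// The receiver can branch on the "purpose" header without parsing the payload.
var message = new MqttApplicationMessageBuilder()
    .WithTopic("contosotopics/topic1")
    .WithPayload("temperature above threshold")
    .WithUserProperty("purpose", "warning")
    .Build();
```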
+ ### Request-response pattern
MQTTv5 introduced fields in the MQTT PUBLISH packet header that provide context for the response message in the request-response pattern. These fields include a response topic and a correlation ID that the responder can use in the response without prior configuration. The response information enables more efficient communication for the standard request-response pattern that is used in command-and-control scenarios.
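A minimal sketch of a request message built with MQTTnet (topic names and the correlation ID are illustrative placeholders, not values from the article):

```csharp
using System.Text;
using MQTTnet;

// The request names the topic on which it expects the reply and carries a
// correlation ID that the responder echoes back in its response message.
var request = new MqttApplicationMessageBuilder()
    .WithTopic("vehicles/vin1/command")
    .WithPayload("unlock")
    .WithResponseTopic("vehicles/vin1/command/response")
    .WithCorrelationData(Encoding.UTF8.GetBytes("req-42"))
    .Build();
```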
MQTTv5 introduced fields in the MQTT PUBLISH packet header that provide context
### Message expiry interval:
In MQTT v5, message expiry interval allows messages to have a configurable lifespan. The message expiry interval is defined as the time interval between the time a message is published to Event Grid and the time when the Event Grid needs to discard the message if it hasn't been delivered. This feature is useful in scenarios where messages are only valid for a certain amount of time, such as time-sensitive commands, real-time data streaming, or security alerts. By setting a message expiry interval, Event Grid can automatically remove outdated messages, ensuring that only relevant information is available to subscribers. If a message's expiry interval is set to zero, it means the message should never expire.
+
### Topic aliases:
In MQTT v5, topic aliases allow a client to use a shorter alias in place of the full topic name in the published message. Event Grid maintains a mapping between the topic alias and the actual topic name. This feature can save network bandwidth and reduce the size of the message header, particularly for topics with long names. It's useful in scenarios where the same topic is repeatedly published in multiple messages, such as in sensor networks. Event Grid supports up to 10 topic aliases. A client can use a Topic Alias field in the PUBLISH packet to replace the full topic name with the corresponding alias.
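A hedged MQTTnet sketch combining the two features above (topic, payload, and values are placeholders):

```csharp
using MQTTnet;

// The message is discarded if undelivered after 60 seconds, and topic alias 1
// is registered for the topic so later publishes can send just the alias.
var message = new MqttApplicationMessageBuilder()
    .WithTopic("factories/plant1/lines/line4/temperature")
    .WithPayload("31.7")
    .WithMessageExpiryInterval(60)   // seconds; zero means never expire
    .WithTopicAlias(1)               // Event Grid supports up to 10 aliases
    .Build();
```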
In MQTT v5, topic aliases allow a client to use a shorter alias in place of the
### Flow control
In MQTT v5, flow control refers to the mechanism for managing the rate and size of messages that a client can handle. Flow control can be configured by setting the Maximum Packet Size and Receive Maximum parameters in the CONNECT packet, as sketched in the example after the next section. The Receive Maximum parameter allows the client to limit the number of messages sent by the broker to the number of messages that the client is able to handle. The Maximum Packet Size parameter defines the maximum size of packets that the client can receive. Event Grid has a message size limit of 512 KiB. This feature ensures reliability and stability of the communication for constrained devices with limited processing speed or storage capabilities.
+
### Negative acknowledgments and server-initiated disconnect packet
For MQTT v5, Event Grid is able to send negative acknowledgments (NACKs) and server-initiated disconnect packets that provide the client with more information about failures for message delivery or connection. These features help the client diagnose the reason behind a failure and take appropriate mitigating actions. Event Grid uses the reason codes that are defined in the [MQTT v5 Specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html).
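The flow-control sketch referenced above, using MQTTnet (values are illustrative only):

```csharp
using MQTTnet.Client;
using MQTTnet.Formatter;

// The client caps what the broker may send it: at most 5 unacknowledged
// QoS 1 messages in flight, and packets no larger than 4 KiB.
var options = new MqttClientOptionsBuilder()
    .WithProtocolVersion(MqttProtocolVersion.V500)
    .WithTcpServer("YOUR_MQTT_HOSTNAME", 8883)
    .WithReceiveMaximum(5)
    .WithMaximumPacketSize(4096)
    .Build();
```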
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
Event Grid supports the following use cases:
### MQTT messaging
-Event Grid enables your clients to communicate on [custom MQTT topic names](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901107) using a publish-subscribe messaging model. Event Grid supports clients that publish and subscribe to messages over MQTT v3.1.1, MQTT v3.1.1 over WebSockets, MQTT v5, and MQTT v5 over WebSockets. Your MQTT client can connect to Event Grid and publish/subscribe to messages, while Event Grid authenticates your clients, authorizes publish/subscribe requests, and forward messages to interested clients. Event Grid allows you to send MQTT messages to the cloud for data analysis, storage, and visualizations, among other use cases.
+Event Grid enables your clients to communicate on [custom MQTT topic names](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901107) using a publish-subscribe messaging model. Event Grid supports clients that publish and subscribe to messages over MQTT v3.1.1, MQTT v3.1.1 over WebSockets, MQTT v5, and MQTT v5 over WebSockets. Event Grid allows you to send MQTT messages to the cloud for data analysis, storage, and visualizations, among other use cases.
+
+The MQTT support in Event Grid is ideal for the implementation of automotive and mobility scenarios, among others. See [the reference architecture](mqtt-automotive-connectivity-and-data-solution.md) to learn how to build secure and scalable solutions for connecting millions of vehicles to the cloud, using Azure's messaging and data analytics services.
:::image type="content" source="media/overview/mqtt-messaging.png" alt-text="High-level diagram of Event Grid that shows bidirectional MQTT communication with publisher and subscriber clients." lightbox="media/overview/mqtt-messaging-high-res.png" border="false":::
expressroute Expressroute Circuit Peerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-circuit-peerings.md
Default quotas and limits apply for every ExpressRoute circuit. Refer to the [Az
* Can only be done using Azure CLI or Azure PowerShell. * Billing type must be **unlimited**. * Changing from *MeteredData* to *UnlimitedData*.
+* Downgrade from Premium SKU to Standard.
#### Unsupported workflow
firewall-manager Configure Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/configure-ddos.md
You should now see that the virtual network has an associated DDoS Protection Pl
## Next steps
-To learn more about DDoS Protection Plans, see:
--- [Azure DDoS Protection overview](../ddos-protection/ddos-protection-overview.md)
+- [Azure DDoS Protection overview](../ddos-protection/ddos-protection-overview.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
firewall-manager Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/deployment-overview.md
The following information applies if you convert an existing virtual network to
## Next steps - [Tutorial: Secure your cloud network with Azure Firewall Manager using the Azure portal](secure-cloud-network.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
+
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/overview.md
Azure Firewall Manager has the following known issues:
## Next steps -- [Learn module: Introduction to Azure Firewall Manager](/training/modules/intro-to-azure-firewall-manager/).
+- [Learn module: Introduction to Azure Firewall Manager](/training/modules/intro-to-azure-firewall-manager/)
- Review [Azure Firewall Manager deployment overview](deployment-overview.md)-- Learn about [secured Virtual Hubs](secured-virtual-hub.md).
+- Learn about [secured Virtual Hubs](secured-virtual-hub.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
+
firewall-manager Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/policy-overview.md
Policies are billed based on firewall associations. A policy with zero or one fi
## Next steps
-To learn how to deploy an Azure Firewall, see [Tutorial: Secure your cloud network with Azure Firewall Manager using the Azure portal](secure-cloud-network.md).
+- Learn how to deploy an Azure Firewall - [Tutorial: Secure your cloud network with Azure Firewall Manager using the Azure portal](secure-cloud-network.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
+
firewall-manager Rule Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/rule-hierarchy.md
Security administrators can use base policy to enforce guardrails and block cert
## Next steps
-Learn more about [Azure Firewall policy](policy-overview.md).
-
+- [Learn more about Azure Firewall policy](policy-overview.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
firewall-manager Secured Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secured-virtual-hub.md
You may configure Virtual WAN to enable inter-region security use cases in the h
- Review Firewall Manager architecture options: [What are the Azure Firewall Manager architecture options?](vhubs-and-vnets.md) - To create a secured virtual hub and use it to secure and govern a hub and spoke network, see [Tutorial: Secure your cloud network with Azure Firewall Manager using the Azure portal](secure-cloud-network.md).
+- [Learn more about Azure network security](../networking/security/index.yml)
+
firewall-manager Vhubs And Vnets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/vhubs-and-vnets.md
The following table compares these two architecture options and can help you dec
## Next steps - Review [Azure Firewall Manager deployment overview](deployment-overview.md)-- Learn about [secured Virtual Hubs](secured-virtual-hub.md).
+- [Learn about secured Virtual Hubs](secured-virtual-hub.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
+
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md
Azure Firewall is Payment Card Industry (PCI), Service Organization Controls (SO
## Next steps -- [Azure Firewall Premium features](premium-features.md)
+- [Azure Firewall Premium features](premium-features.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
firewall Firewall Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-diagnostics.md
Browse to an Azure Firewall. Under **Monitoring**, select **Metrics**. To view t
Now that you've configured your firewall to collect logs, you can explore Azure Monitor logs to view your data. - [Monitor logs using Azure Firewall Workbook](firewall-workbook.md)- - [Networking monitoring solutions in Azure Monitor logs](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics)
+- [Learn more about Azure network security](../networking/security/index.yml)
+
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Untrusted customer signed certificates|Customer signed certificates aren't trust
- [Quickstart: Deploy Azure Firewall with Availability Zones - ARM template](deploy-template.md) - [Tutorial: Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md) - [Learn module: Introduction to Azure Firewall](/training/modules/introduction-azure-firewall/)
+- [Learn more about Azure network security](../networking/security/index.yml)
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
For the supported regions for Azure Firewall, see [Azure products available by r
- [Learn about Azure Firewall Premium certificates](premium-certificates.md) - [Deploy and configure Azure Firewall Premium](premium-deploy.md) - [Migrate to Azure Firewall Premium](premium-migrate.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
As a result, there's no need to create an explicit deny rule from VNet-B to VNet
## Next steps - Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
+- [Learn more about Azure network security](../networking/security/index.yml)
+
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
You can keep your firewall resources to continue testing, or if no longer needed
## Next steps
-[Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
+- [Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
+- [Learn more about Azure network security](../networking/security/index.yml)
hdinsight Apache Ambari Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/apache-ambari-email.md
Title: 'Tutorial: Configure Apache Ambari email notifications in Azure HDInsight
description: This article describes how to use SendGrid with Apache Ambari for email notifications. Previously updated : 04/11/2022 Last updated : 05/25/2023 #Customer intent: As a HDInsight user, I want to configure Apache Ambari to send email notifications.
hdinsight Cluster Reboot Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/cluster-reboot-vm.md
description: Learn how to reboot unresponsive VMs for Azure HDInsight clusters.
Previously updated : 04/21/2022 Last updated : 05/25/2023 # Reboot VMs for HDInsight clusters
hdinsight Apache Domain Joined Run Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-run-kafka.md
Title: Tutorial - Apache Kafka & Enterprise Security - Azure HDInsight
description: Tutorial - Learn how to configure Apache Ranger policies for Kafka in Azure HDInsight with Enterprise Security Package. Previously updated : 05/09/2023 Last updated : 05/25/2023 # Tutorial: Configure Apache Kafka policies in HDInsight with Enterprise Security Package
hdinsight Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/sample-script.md
Title: Sample script for Azure HDInsight when cluster creation fails
description: Sample script to run when Azure HDInsight cluster creation fails with DomainNotFound error. Previously updated : 04/25/2022 Last updated : 05/25/2023 # Sample Script
hdinsight Troubleshoot Wasbs Storage Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-wasbs-storage-exception.md
Title: The account being accessed does not support http error in Azure HDInsight
description: This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. Previously updated : 04/22/2022 Last updated : 05/25/2023 # The account being accessed does not support http error in Azure HDInsight
hdinsight Hdinsight Storage Sharedaccesssignature Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-storage-sharedaccesssignature-permissions.md
description: Learn how to use Shared Access Signatures to restrict HDInsight acc
Previously updated : 04/22/2022 Last updated : 05/25/2023 # Use Azure Blob storage Shared Access Signatures to restrict access to data in HDInsight
hdinsight Apache Spark Streaming Exactly Once https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-streaming-exactly-once.md
description: How to set up Apache Spark Streaming to process an event once and o
Previously updated : 04/27/2022 Last updated : 05/25/2023 # Create Apache Spark Streaming jobs with exactly-once event processing
hdinsight Apache Spark Streaming Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-streaming-overview.md
description: How to use Apache Spark Streaming applications on HDInsight Spark c
Previously updated : 04/28/2022 Last updated : 05/25/2023 # Overview of Apache Spark Streaming
hdinsight Apache Spark Troubleshoot Event Log Requestbodytoolarge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-event-log-requestbodytoolarge.md
Title: RequestBodyTooLarge error from Apache Spark app - Azure HDInsight
description: NativeAzureFileSystem ... RequestBodyTooLarge appears in log for Apache Spark streaming app in Azure HDInsight Previously updated : 07/29/2019 Last updated : 05/25/2023 # RequestBodyTooLarge appear in Apache Spark Streaming application log in HDInsight
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
For an overview of the MedTech service device mapping, see
> [!div class="nextstepaction"] > [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
+For an overview of the MedTech service FHIR destination mapping, see
+
+> [!div class="nextstepaction"]
+> [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
+
+To learn about the MedTech service frequently asked questions (FAQs), see
+
+> [!div class="nextstepaction"]
+> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+ FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-hub Iot Hub Device Management Iot Extension Azure Cli 2 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-device-management-iot-extension-azure-cli-2-0.md
For more detailed explanation on the differences and guidance on using these opt
Device twins are JSON documents that store device state information (metadata, configurations, and conditions). IoT Hub persists a device twin for each device that connects to it. For more information about device twins, see [Get started with device twins](device-twins-node.md). - [!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)] [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
iot-hub Iot Hub Live Data Visualization In Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-live-data-visualization-in-web-apps.md
Title: Real-time data visualization of your IoT hub data in a web app
-description: Use a web application to visualize temperature and humidity data that is collected from a sensor and sent to your Iot hub.
+ Title: Tutorial - Visualize IoT data in a web app
+
+description: This tutorial uses a web application to visualize temperature and humidity data that is collected from a sensor and sent to your IoT hub.
Previously updated : 11/18/2021 Last updated : 05/23/2023
-# Visualize real-time sensor data from your Azure IoT hub in a web application
+# Tutorial: Visualize real-time sensor data from your Azure IoT hub in a web application
-![End-to-end diagram](./media/iot-hub-live-data-visualization-in-web-apps/1_iot-hub-end-to-end-diagram.png)
+In this article, you learn how to visualize real-time sensor data that your IoT hub receives with a Node.js web app running on your local computer. After running the web app locally, you can host the web app in Azure App Service.
-In this article, you learn how to visualize real-time sensor data that your IoT hub receives with a Node.js web app running on your local computer. After running the web app locally, you can optionally follow steps to host the web app in Azure App Service. If you want to try to visualize the data in your IoT hub by using Power BI, see [Use Power BI to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-power-bi.md).
+![End-to-end diagram](./media/iot-hub-live-data-visualization-in-web-apps/1_iot-hub-end-to-end-diagram.png)
## Prerequisites
-* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials. For example, you can go to [Raspberry Pi with Node.js](iot-hub-raspberry-pi-kit-node-get-started.md) or to one of the [Send telemetry](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) quickstarts. These articles cover the following requirements:
+This tutorial assumes that you already have an IoT hub instance in your Azure subscription and a registered IoT device sending temperature data.
- * An active Azure subscription
- * An Iot hub under your subscription
- * A client application that sends messages to your Iot hub
+The web application sample for this tutorial is written in Node.js. The steps in this article assume a Windows development machine; however, you can also perform these steps on a Linux system in your preferred shell.
-* [Node.js](https://nodejs.org) version 10.6 or later. To check your node version run `node --version`.
+* Use the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) or complete one of the [Send telemetry](../iot-develop/quickstart-send-telemetry-iot-hub.md) quickstarts to get a device sending temperature data to IoT Hub. These articles cover the following requirements:
+
+ * An active Azure subscription
+ * An IoT hub under your subscription
+ * A registered device running a client application that sends messages to your IoT hub
-* [Download Git](https://www.git-scm.com/downloads)
+* [Node.js](https://nodejs.org) version 14 or later. To check your node version run `node --version`.
-* The steps in this article assume a Windows development machine; however, you can easily perform these steps on a Linux system in your preferred shell.
+* [Git](https://www.git-scm.com/downloads).
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] ## Add a consumer group to your IoT hub
-[Consumer groups](../event-hubs/event-hubs-features.md#event-consumers) provide independent views into the event stream that enable apps and Azure services to independently consume data from the same Event Hub endpoint. In this section, you add a consumer group to your IoT hub's built-in endpoint that the web app will use to read data from.
+[Consumer groups](../event-hubs/event-hubs-features.md#event-consumers) provide independent views into the event stream that enable apps and Azure services to independently consume data from the same Event Hubs endpoint. In this section, you add a consumer group to your IoT hub's built-in endpoint that the web app uses to read data.
Run the following command to add a consumer group to the built-in endpoint of your IoT hub: ```azurecli-interactive
-az iot hub consumer-group create --hub-name YourIoTHubName --name YourConsumerGroupName
+az iot hub consumer-group create --hub-name YOUR_IOT_HUB_NAME --name YOUR_CONSUMER_GROUP_NAME
```
-Note down the name you choose, you'll need it later in this tutorial.
+Note down the name you choose; you need it later in this tutorial.
## Get a service connection string for your IoT hub IoT hubs are created with several default access policies. One such policy is the **service** policy, which provides sufficient permissions for a service to read and write the IoT hub's endpoints. Run the following command to get a connection string for your IoT hub that adheres to the service policy: ```azurecli-interactive
-az iot hub connection-string show --hub-name YourIotHub --policy-name service
+az iot hub connection-string show --hub-name YOUR_IOT_HUB_NAME --policy-name service
```
-The connection string should look similar to the following:
+The service connection string should look similar to the following example:
```javascript
-"HostName={YourIotHubName}.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey={YourSharedAccessKey}"
+"HostName=YOUR_IOT_HUB_NAME.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey=YOUR_SHARED_ACCESS_KEY"
```
-Note down the service connection string, you'll need it later in this tutorial.
+Note down the service connection string; you need it later in this tutorial.
## Download the web app from GitHub
-Open a command window, and enter the following commands to download the sample from GitHub and change to the sample directory:
-
-```cmd
-git clone https://github.com/Azure-Samples/web-apps-node-iot-hub-data-visualization.git
-cd web-apps-node-iot-hub-data-visualization
-```
+Download or clone the web app sample from GitHub: [web-apps-node-iot-hub-data-visualization](https://github.com/Azure-Samples/web-apps-node-iot-hub-data-visualization.git).
## Examine the web app code
-From the web-apps-node-iot-hub-data-visualization directory, open the web app in your favorite editor. The following shows the file structure viewed in VS Code:
+On your development machine, navigate to the **web-apps-node-iot-hub-data-visualization** directory, then open the web app in your favorite editor. The following shows the file structure viewed in Visual Studio Code:
-![Web app file structure](./media/iot-hub-live-data-visualization-in-web-apps/web-app-files.png)
+![Screenshot that shows the web app file structure.](./media/iot-hub-live-data-visualization-in-web-apps/web-app-files.png)
Take a moment to examine the following files:
-* **Server.js** is a service-side script that initializes the web socket and the Event Hub wrapper class. It provides a callback to the Event Hub wrapper class that the class uses to broadcast incoming messages to the web socket.
+* **server.js** is a service-side script that initializes the web socket and the Event Hubs wrapper class. It provides a callback to the Event Hubs wrapper class that the class uses to broadcast incoming messages to the web socket.
-* **Event-hub-reader.js** is a service-side script that connects to the IoT hub's built-in endpoint using the specified connection string and consumer group. It extracts the DeviceId and EnqueuedTimeUtc from metadata on incoming messages and then relays the message using the callback method registered by server.js.
+* **scripts/event-hub-reader.js** is a service-side script that connects to the IoT hub's built-in endpoint using the specified connection string and consumer group. It extracts the DeviceId and EnqueuedTimeUtc from metadata on incoming messages and then relays the message using the callback method registered by server.js.
-* **Chart-device-data.js** is a client-side script that listens on the web socket, keeps track of each DeviceId, and stores the last 50 points of incoming data for each device. It then binds the selected device data to the chart object.
+* **public/js/chart-device-data.js** is a client-side script that listens on the web socket, keeps track of each DeviceId, and stores the last 50 points of incoming data for each device. It then binds the selected device data to the chart object.
-* **Index.html** handles the UI layout for the web page and references the necessary scripts for client-side logic.
+* **public/index.html** handles the UI layout for the web page and references the necessary scripts for client-side logic.
## Configure environment variables for the web app To read data from your IoT hub, the web app needs your IoT hub's connection string and the name of the consumer group that it should read through. It gets these strings from the process environment in the following lines in server.js:
-```javascript
-const iotHubConnectionString = process.env.IotHubConnectionString;
-const eventHubConsumerGroup = process.env.EventHubConsumerGroup;
-```
Set the environment variables in your command window with the following commands. Replace the placeholder values with the service connection string for your IoT hub and the name of the consumer group you created previously. Don't quote the strings. ```cmd
-set IotHubConnectionString=YourIoTHubConnectionString
-set EventHubConsumerGroup=YourConsumerGroupName
+set IotHubConnectionString=YOUR_IOT_HUB_CONNECTION_STRING
+set EventHubConsumerGroup=YOUR_CONSUMER_GROUP_NAME
``` ## Run the web app
set EventHubConsumerGroup=YourConsumerGroupName
3. You should see output in the console that indicates that the web app has successfully connected to your IoT hub and is listening on port 3000:
- ![Web app started on console](./media/iot-hub-live-data-visualization-in-web-apps/web-app-console-start.png)
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-web-apps/web-app-console-start.png" alt-text="Screenshot showing the web app sample successfully running in the console.":::
+ ## Open a web page to see data from your IoT hub
Open a browser to `http://localhost:3000`.
In the **Select a device** list, select your device to see a running plot of the last 50 temperature and humidity data points sent by the device to your IoT hub.
-![Web app page showing real-time temperature and humidity](./media/iot-hub-live-data-visualization-in-web-apps/web-page-output.png)
You should also see output in the console that shows the messages that your web app is broadcasting to the browser client:
-![Web app broadcast output on console](./media/iot-hub-live-data-visualization-in-web-apps/web-app-console-broadcast.png)
## Host the web app in App Service
-The [Web Apps feature of Azure App Service](../app-service/overview.md) provides a platform as a service (PAAS) for hosting web applications. Web applications hosted in Azure App Service can benefit from powerful Azure features like additional security, load balancing, and scalability as well as Azure and partner DevOps solutions like continuous deployment, package management, and so on. Azure App Service supports web applications developed in many popular languages and deployed on Windows or Linux infrastructure.
+[Azure App Service](../app-service/overview.md) provides a platform as a service (PaaS) for hosting web applications. Web applications hosted in App Service can benefit from powerful Azure features like security, load balancing, and scalability, as well as Azure and partner DevOps solutions like continuous deployment, package management, and so on. App Service supports web applications developed in many popular languages and deployed on Windows or Linux infrastructure.
-In this section, you provision a web app in App Service and deploy your code to it by using Azure CLI commands. You can find details of the commands used in the [az webapp](/cli/azure/webapp) documentation. Before starting, make sure you've completed the steps to [add a consumer group to your IoT hub](#add-a-consumer-group-to-your-iot-hub), [get a service connection string for your IoT hub](#get-a-service-connection-string-for-your-iot-hub), and [download the web app from GitHub](#download-the-web-app-from-github).
+In this section, you provision a web app in App Service and deploy your code to it by using Azure CLI commands. You can find details of the commands used in the [az webapp](/cli/azure/webapp) documentation.
1. An [App Service plan](../app-service/overview-hosting-plans.md) defines a set of compute resources for an app hosted in App Service to run. In this tutorial, we use the Developer/Free tier to host the web app. With the Free tier, your web app runs on shared Windows resources with other App Service apps, including apps of other customers. Azure also offers App Service plans to deploy web apps on Linux compute resources. You can skip this step if you already have an App Service plan that you want to use.
- To create an App Service plan using the Windows free tier, run the following command. Use the same resource group your IoT hub is in. Your service plan name can contain upper and lower case letters, numbers, and hyphens.
+ To create an App Service plan using the Windows free tier, use the [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create) command. Use the same resource group your IoT hub is in. Your service plan name can contain upper and lower case letters, numbers, and hyphens.
```azurecli-interactive
- az appservice plan create --name <app service plan name> --resource-group <your resource group name> --sku FREE
+ az appservice plan create --name NEW_NAME_FOR_YOUR_APP_SERVICE_PLAN --resource-group YOUR_RESOURCE_GROUP_NAME --sku FREE
```
-2. Now provision a web app in your App Service plan. The `--deployment-local-git` parameter enables the web app code to be uploaded and deployed from a Git repository on your local machine. Your web app name must be globally unique and can contain upper and lower case letters, numbers, and hyphens. Be sure to specify Node version 10.6 or later for the `--runtime` parameter, depending on the version of the Node.js runtime you are using. You can use the `az webapp list-runtimes` command to get a list of supported runtimes.
+2. Use the [az webapp create](/cli/azure/webapp#az-webapp-create) command to provision a web app in your App Service plan. The `--deployment-local-git` parameter enables the web app code to be uploaded and deployed from a Git repository on your local machine. Your web app name must be globally unique and can contain upper and lower case letters, numbers, and hyphens. Be sure to specify Node version 14 or later for the `--runtime` parameter, depending on the version of the Node.js runtime you're using. You can use the `az webapp list-runtimes` command to get a list of supported runtimes.
```azurecli-interactive
- az webapp create -n <your web app name> -g <your resource group name> -p <your app service plan name> --runtime "node|10.6" --deployment-local-git
+ az webapp create -n NEW_NAME_FOR_YOUR_WEB_APP -g YOUR_RESOURCE_GROUP_NAME -p YOUR_APP_SERVICE_PLAN_NAME --runtime "NODE:14LTS" --deployment-local-git
```
-3. Now add Application Settings for the environment variables that specify the IoT hub connection string and the Event hub consumer group. Individual settings are space delimited in the `-settings` parameter. Use the service connection string for your IoT hub and the consumer group you created previously in this tutorial. Don't quote the values.
+3. Use the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command to add application settings for the environment variables that specify the IoT hub connection string and the Event hub consumer group. Individual settings are space-delimited in the `--settings` parameter. Use the service connection string for your IoT hub and the consumer group you created previously in this tutorial.
```azurecli-interactive
- az webapp config appsettings set -n <your web app name> -g <your resource group name> --settings EventHubConsumerGroup=<your consumer group> IotHubConnectionString="<your IoT hub connection string>"
+ az webapp config appsettings set -n YOUR_WEB_APP_NAME -g YOUR_RESOURCE_GROUP_NAME --settings EventHubConsumerGroup=YOUR_CONSUMER_GROUP_NAME IotHubConnectionString="YOUR_IOT_HUB_CONNECTION_STRING"
``` 4. Enable the Web Sockets protocol for the web app and set the web app to receive HTTPS requests only (HTTP requests are redirected to HTTPS). ```azurecli-interactive
- az webapp config set -n <your web app name> -g <your resource group name> --web-sockets-enabled true
- az webapp update -n <your web app name> -g <your resource group name> --https-only true
+ az webapp config set -n YOUR_WEB_APP_NAME -g YOUR_RESOURCE_GROUP_NAME --web-sockets-enabled true
+ az webapp update -n YOUR_WEB_APP_NAME -g YOUR_RESOURCE_GROUP_NAME --https-only true
```
-5. To deploy the code to App Service, you'll use your [user-level deployment credentials](../app-service/deploy-configure-credentials.md). Your user-level deployment credentials are different from your Azure credentials and are used for Git local and FTP deployments to a web app. Once set, they're valid across all of your App Service apps in all subscriptions in your Azure account. If you've previously set user-level deployment credentials, you can use them.
+5. To deploy the code to App Service, you use [user-level deployment credentials](../app-service/deploy-configure-credentials.md). Your user-level deployment credentials are different from your Azure credentials and are used for Git local and FTP deployments to a web app. Once set, they're valid across all of your App Service apps in all subscriptions in your Azure account. If you've previously set user-level deployment credentials, you can use them.
- If you haven't previously set user-level deployment credentials or you can't remember your password, run the following command. Your deployment user name must be unique within Azure, and it must not contain the '\@' symbol for local Git pushes. When you're prompted, enter and confirm your new password. The password must be at least eight characters long, with two of the following three elements: letters, numbers, and symbols.
+ If you haven't previously set user-level deployment credentials or you can't remember your password, run the [az webapp deployment user set](/cli/azure/webapp/deployment/user#az-webapp-deployment-user-set) command. Your deployment user name must be unique within Azure, and it must not contain the '\@' symbol for local Git pushes. When you're prompted, enter and confirm your new password. The password must be at least eight characters long, with two of the following three elements: letters, numbers, and symbols.
```azurecli-interactive
- az webapp deployment user set --user-name <your deployment user name>
+ az webapp deployment user set --user-name NAME_FOR_YOUR_USER_CREDENTIALS
``` 6. Get the Git URL to use to push your code up to App Service. ```azurecli-interactive
- az webapp deployment source config-local-git -n <your web app name> -g <your resource group name>
+ az webapp deployment source config-local-git -n YOUR_WEB_APP_NAME -g YOUR_RESOURCE_GROUP_NAME
```
-7. Add a remote to your clone that references the Git repository for the web app in App Service. For \<Git clone URL\>, use the URL returned in the previous step. Run the following command in your command window. Make sure that you're in the sample directory, *web-apps-code-iot-hub-data-visualization*, then run the following command in your command window.
+7. Add a remote to your clone that references the Git repository for the web app in App Service. Replace the `GIT_ENDPOINT_URL` placeholder with the URL returned in the previous step. Make sure that you're in the sample directory, *web-apps-node-iot-hub-data-visualization*, then run the following command in your command window.
```cmd
- git remote add webapp <Git clone URL>
+ git remote add webapp GIT_ENDPOINT_URL
```
-8. To deploy the code to App Service, enter the following command in your command window. Make sure that you are in the sample directory *web-apps-code-iot-hub-data-visualization*. If you are prompted for credentials, enter the user-level deployment credentials that you created in step 5. Push to the main branch of the App Service remote.
+8. To deploy the code to App Service, enter the following command in your command window. Make sure that you're in the sample directory *web-apps-node-iot-hub-data-visualization*. If you're prompted for credentials, enter the user-level deployment credentials that you created in step 5. Push to the main branch of the App Service remote.
- ```cmd
- git push webapp master:master
- ```
+ ```cmd
+ git push webapp master:master
+ ```
-9. The progress of the deployment will update in your command window. A successful deployment will end with lines similar to the following output:
+9. The progress of the deployment updates in your command window. A successful deployment ends with lines similar to the following output:
- ```cmd
- remote:
- remote: Finished successfully.
- remote: Running post deployment command(s)...
- remote: Deployment successful.
- To https://contoso-web-app-3.scm.azurewebsites.net/contoso-web-app-3.git
- 6b132dd..7cbc994 master -> master
- ```
+ ```cmd
+ remote:
+ remote: Finished successfully.
+ remote: Running post deployment command(s)...
+ remote: Deployment successful.
+ To https://contoso-web-app-3.scm.azurewebsites.net/contoso-web-app-3.git
+ 6b132dd..7cbc994 master -> master
+ ```
-10. Run the following command to query the state of your web app and make sure it is running:
+10. Run the following command to query the state of your web app and make sure it's running:
```azurecli-interactive
- az webapp show -n <your web app name> -g <your resource group name> --query state
+ az webapp show -n YOUR_WEB_APP_NAME -g YOUR_RESOURCE_GROUP_NAME --query state
``` 11. Navigate to `https://<your web app name>.azurewebsites.net` in a browser. A web page similar to the one you saw when you ran the web app locally displays. Assuming that your device is running and sending data, you should see a running plot of the 50 most recent temperature and humidity readings sent by the device. ## Troubleshooting
-If you come across any issues with this sample, try the steps in the following sections. If you still have problems, send us feedback at the bottom of this topic.
+If you come across any issues with this sample, try the steps in the following sections. If you still have problems, send us feedback at the bottom of this article.
### Client issues
-* If a device does not appear in the list, or no graph is being drawn, make sure the device code is running on your device.
+* If a device doesn't appear in the list, or no graph is being drawn, make sure the device code is running on your device.
-* In the browser, open the developer tools (in many browsers the F12 key will open it), and find the console. Look for any warnings or errors printed there.
+* In the browser, open the developer tools (in many browsers the F12 key opens it), and find the console. Look for any warnings or errors printed there.
* You can debug the client-side script in /js/chart-device-data.js.
If you come across any issues with this sample, try the steps in the following s
* From your web app in the Azure portal, under **Development Tools**, select **Console** and validate the Node and npm versions with `node -v` and `npm -v`.
-* If you see an error about not finding a package, you may have run the steps out of order. When the site is deployed (with `git push`) the app service runs `npm install`, which runs based on the current version of node it has configured. If that is changed in configuration later, you'll need to make a meaningless change to the code and push again.
+* If you see an error about not finding a package, you may have run the steps out of order. When the site is deployed (with `git push`) the app service runs `npm install`, which runs based on the current version of node it has configured. If that is changed in configuration later, you need to make a meaningless change to the code and push again.
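  For example, one way to force a redeployment without editing any files is an empty commit (a sketch; `webapp` is the remote name added earlier):

  ```cmd
  git commit --allow-empty -m "Force redeploy"
  git push webapp master:master
  ```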
## Next steps

You've successfully used your web app to visualize real-time sensor data from your IoT hub.
-For another way to visualize data from Azure IoT Hub, see [Use Power BI to visualize real-time sensor data from your IoT hub](iot-hub-live-data-visualization-in-power-bi.md).
+For another way to interact with data from Azure IoT Hub, see the following tutorial:
+> [!div class="nextstepaction"]
+> [Use Logic Apps for remote monitoring and notifications](./iot-hub-monitoring-notifications-with-azure-logic-apps.md)
iot-hub Iot Hub Monitoring Notifications With Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md
Last updated 07/18/2019
![End-to-end diagram](media/iot-hub-monitoring-notifications-with-azure-logic-apps/iot-hub-e2e-logic-apps.png)

[Azure Logic Apps](../logic-apps/index.yml) can help you orchestrate workflows across on-premises and cloud services, one or more enterprises, and across various protocols. A logic app begins with a trigger, which is then followed by one or more actions that can be sequenced using built-in controls, such as conditions and iterators. This flexibility makes Logic Apps an ideal solution for IoT monitoring scenarios. For example, the arrival of telemetry data from a device at an IoT Hub endpoint can initiate logic app workflows to warehouse the data in an Azure Storage blob, send email alerts to warn of data anomalies, schedule a technician visit if a device reports a failure, and so on. In this article, you learn how to create a logic app that connects your IoT hub and your mailbox for temperature monitoring and notifications. The client code running on your device sets an application property, `temperatureAlert`, on every telemetry message it sends to your IoT hub. When the client code detects a temperature above 30 C, it sets this property to `true`; otherwise, it sets the property to `false`.
key-vault About Keys Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/about-keys-details.md
Previously updated : 02/09/2023 Last updated : 05/17/2023
Key vault key auto-rotation can be set by configuring key auto-rotation policy.
In addition to the key material, the following attributes may be specified. In a JSON Request, the attributes keyword and braces, '{' '}', are required even if there are no attributes specified.

-- *enabled*: boolean, optional, default is **true**. Specifies whether the key is enabled and useable for cryptographic operations. The *enabled* attribute is used with *nbf* and *exp*. When an operation occurs between *nbf* and *exp*, it will only be permitted if *enabled* is set to **true**. Operations outside the *nbf* / *exp* window are automatically disallowed, except for [decrypt, unwrap, and verify](#date-time-controlled-operations).
-- *nbf*: IntDate, optional, default is now. The *nbf* (not before) attribute identifies the time before which the key MUST NOT be used for cryptographic operations, except for [decrypt, unwrap, and verify](#date-time-controlled-operations). The processing of the *nbf* attribute requires that the current date/time MUST be after or equal to the not-before date/time listed in the *nbf* attribute. Key Vault MAY provide for some small leeway, normally no more than a few minutes, to account for clock skew. Its value MUST be a number containing an IntDate value.
-- *exp*: IntDate, optional, default is "forever". The *exp* (expiration time) attribute identifies the expiration time on or after which the key MUST NOT be used for cryptographic operation, except for [decrypt, unwrap, and verify](#date-time-controlled-operations). The processing of the *exp* attribute requires that the current date/time MUST be before the expiration date/time listed in the *exp* attribute. Key Vault MAY provide for some small leeway, typically no more than a few minutes, to account for clock skew. Its value MUST be a number containing an IntDate value.
+- *enabled*: boolean, optional, default is **true**. Specifies whether the key is enabled and useable for cryptographic operations. The *enabled* attribute is used with *nbf* and *exp*. When an operation occurs between *nbf* and *exp*, it will only be permitted if *enabled* is set to **true**. Operations outside the *nbf* / *exp* window are automatically disallowed, except for [decrypt, release, unwrap, and verify](#date-time-controlled-operations).
+- *nbf*: IntDate, optional, default is now. The *nbf* (not before) attribute identifies the time before which the key MUST NOT be used for cryptographic operations, except for [decrypt, release, unwrap, and verify](#date-time-controlled-operations). The processing of the *nbf* attribute requires that the current date/time MUST be after or equal to the not-before date/time listed in the *nbf* attribute. Key Vault MAY provide for some small leeway, normally no more than a few minutes, to account for clock skew. Its value MUST be a number containing an IntDate value.
+- *exp*: IntDate, optional, default is "forever". The *exp* (expiration time) attribute identifies the expiration time on or after which the key MUST NOT be used for cryptographic operation, except for [decrypt, release, unwrap, and verify](#date-time-controlled-operations). The processing of the *exp* attribute requires that the current date/time MUST be before the expiration date/time listed in the *exp* attribute. Key Vault MAY provide for some small leeway, typically no more than a few minutes, to account for clock skew. Its value MUST be a number containing an IntDate value.
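As a minimal sketch of setting these attributes from the Azure CLI (the vault and key names are hypothetical; `--not-before` and `--expires` take UTC date/times that correspond to the *nbf* and *exp* IntDate values):

```azurecli-interactive
# Create a key that's valid only within an explicit nbf/exp window (illustrative values)
az keyvault key create --vault-name my-example-vault --name my-example-key \
  --not-before "2023-06-01T00:00:00Z" --expires "2024-06-01T00:00:00Z"
```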
There are more read-only attributes that are included in any response that includes key attributes:
For more information on IntDate and other data types, see [About keys, secrets,
### Date-time controlled operations
-Not-yet-valid and expired keys, outside the *nbf* / *exp* window, will work for **decrypt**, **unwrap**, and **verify** operations (won't return 403, Forbidden). The rationale for using the not-yet-valid state is to allow a key to be tested before production use. The rationale for using the expired state is to allow recovery operations on data that was created when the key was valid. Also, you can disable access to a key using Key Vault policies, or by updating the *enabled* key attribute to **false**.
+Not-yet-valid and expired keys, outside the *nbf* / *exp* window, will work for **decrypt**, **release**, **unwrap**, and **verify** operations (won't return 403, Forbidden). The rationale for using the not-yet-valid state is to allow a key to be tested before production use. The rationale for using the expired state is to allow recovery operations on data that was created when the key was valid. Also, you can disable access to a key using Key Vault policies, or by updating the *enabled* key attribute to **false**.
For more information on data types, see [Data types](../general/about-keys-secrets-certificates.md#data-types).
key-vault Authorize Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/authorize-azure-resource-manager.md
Previously updated : 11/14/2022 Last updated : 05/25/2023 # Customer intent: As a managed HSM administrator, I want to authorize Azure Resource Manager to perform key management operations via Azure Managed HSM
Azure Managed HSM doesn't trust Azure Resource Manager by default. However, for
For the Azure portal or Azure Resource Manager to interact with Azure Managed HSM in the same way as Azure Key Vault Standard and Premium, an authorized Managed HSM administrator must allow Azure Resource Manager to act on behalf of the user. To change this behavior and allow users to use the Azure portal or Azure Resource Manager to create new keys or list keys, make the following Azure Managed HSM setting update:

```azurecli-interactive
-az rest --method PATCH --url "https://<managed-hsm-url>/settings/AllowKeyManagementOperationsThroughARM" --body "{\"value\":\"true\"}" --headers "Content-Type=application/json" --resource "https://managedhsm.azure.net"
+az keyvault setting update --hsm-name <managed-hsm name> --name AllowKeyManagementOperationsThroughARM --value true
```

To disable this trust and revert to the default behavior of Managed HSM:

```azurecli-interactive
-az rest --method PATCH --url "https://<managed-hsm-url>/settings/AllowKeyManagementOperationsThroughARM" --body "{\"value\":\"false\"}" --headers "Content-Type=application/json" --resource "https://managedhsm.azure.net"
+az keyvault setting update --hsm-name <managed-hsm name> --name AllowKeyManagementOperationsThroughARM --value false
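# To verify the current value, the same command group also offers a read-back
# (a hedged sketch; assumes `az keyvault setting show` is available in your CLI version):
#   az keyvault setting show --hsm-name <managed-hsm name> --name AllowKeyManagementOperationsThroughARM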
```

## Next steps
lab-services Add Lab Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/add-lab-creator.md
This article shows you how to add users as lab creators to a lab account or lab plan in Azure Lab Services. These users then can create labs and manage those labs.
-## Add Azure AD user account to Lab Creator role
-
-The user account you used to create the lab account or lab plan is automatically able to create labs. Otherwise, the user must be a member of the **Lab Creator** role. If using a lab plan, user must be a **Lab Creator** on the lab plan or the resource group that contains the lab plan. If using a lab account, the user must be a **Lab Creator** on the lab account. If you are planning to use the same user account to create a lab as you did creating the lab plan or lab account, you can skip this step. To use another user account to create a lab, do the following steps:
-
-To provide educators the permission to create labs for their classes, add them to the **Lab Creator** role: For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
-1. On the **Lab Plan** resource, select **Access control (IAM)**
+## Prerequisites
-1. Select **Add** > **Add role assignment**.
+- To add lab creators to a lab plan, your Azure account needs to have the [Owner](./concept-lab-services-role-based-access-control.md#owner-role) Azure RBAC role assigned on the resource group. Learn more about the [Azure Lab Services built-in roles](./concept-lab-services-role-based-access-control.md).
- ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
-
-1. On the **Role** tab, select the **Lab Creator** role.
-
- ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
-
-1. On the **Members** tab, select the user you want to add to the Lab Creators role
+## Add Azure AD user account to Lab Creator role
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
- > [!NOTE]
- > If you are adding a non-Microsoft account user as a lab creator, see [Adding a guest user as a lab creator](#adding-a-guest-user-as-a-lab-creator).
+If you're using a lab account, assign the Lab Creator role on the lab account.
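If you prefer the command line, a minimal sketch with the Azure CLI (the user and resource group names are placeholders) looks like this:

```azurecli-interactive
# Assign the Lab Creator role at the resource group that contains the lab plan or lab account
az role assignment create --assignee "educator@contoso.com" --role "Lab Creator" --resource-group "YOUR_RESOURCE_GROUP_NAME"
```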
-## Adding a guest user as a lab creator
+## Add a guest user as a lab creator
You might need to add an external user as a lab creator. If that's the case, you'll need to add them as a guest account in the Azure AD tenant that's attached to the subscription. The following types of email accounts might be used:
lab-services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/capacity-limits.md
These actions may be disabled if there are no more cores that can be enabled for you
### Prerequisites

-- To create a support request, your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Support Request Contributor](/azure/role-based-access-control/built-in-roles#support-request-contributor) Azure Active Directory role at the subscription level.

## Request a limit increase
lab-services Classroom Labs Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-concepts.md
The following conceptual diagram shows how the different Azure Lab Services comp
In Azure Lab Services, a lab plan is an Azure resource and serves as a collection of configurations and settings that apply to all the labs created from it. For example, lab plans specify the networking setup, the list of available VM images and VM sizes, and if [Canvas integration](lab-services-within-canvas-overview.md) can be used for a lab. Learn more about [planning your lab plan settings](./lab-plan-setup-guide.md#plan-your-lab-plan-settings).
-You can associate a lab plan with zero or more [labs](#lab). Each lab uses the configuration settings from the lab plan. Azure Lab Services uses Azure RBAC roles to grant permissions for creating labs. Learn more about [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+You can associate a lab plan with zero or more [labs](#lab). Each lab uses the configuration settings from the lab plan. Azure Lab Services uses Azure RBAC roles to grant permissions for creating labs. Learn more about [Azure Lab Services built-in roles](./concept-lab-services-role-based-access-control.md).
## Lab
You can further configure the lab behavior by creating [lab schedules](#schedule
When you publish a lab, Azure Lab Services provisions the lab VMs. All lab VMs for a lab share the same configuration and are identical.
-To create labs in Azure Lab Services, your Azure account needs to have the Lab Creator Azure RBAC role, or you need to be the owner of the corresponding lab plan. Learn more about [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+To create labs in Azure Lab Services, your Azure account needs to have the Lab Creator Azure RBAC role, or you need to be the owner of the corresponding lab plan. Learn more about [Azure Lab Services built-in roles](./concept-lab-services-role-based-access-control.md).
You use the Azure Lab Services website (https://labs.azure.com) to create labs for a lab plan. Alternately, you can also [configure Microsoft Teams integration](./how-to-configure-teams-for-lab-plans.md) or [Canvas integration](./how-to-configure-canvas-for-lab-plans.md) with Azure Lab Services to create labs directly in Microsoft Teams or Canvas.
lab-services How To Attach Detach Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-detach-shared-image-gallery.md
Saving images to a compute gallery and replicating those images incurs extra cos
## Prerequisites -- To change settings for the lab plan, your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Lab Services Contributor](/azure/role-based-access-control/built-in-roles#lab-services-contributor) role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+- To change settings for the lab plan, your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Lab Services Contributor](/azure/role-based-access-control/built-in-roles#lab-services-contributor) role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./concept-lab-services-role-based-access-control.md).
- To attach an Azure compute gallery to a lab plan, your Azure account needs to have the following permissions:
lab-services How To Configure Canvas For Lab Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-canvas-for-lab-plans.md
If you've already configured your course to use Azure Lab Services, learn how yo
## Prerequisites -- An Azure Lab Services lab plan. Follow these steps to [Create a lab plan in the Azure portal](./quick-create-resources.md), if you don't have one yet. - Your Canvas account needs [Admin permissions](https://community.canvaslms.com/t5/Canvas-Basics-Guide/What-is-the-Admin-role/ta-p/78) to add the Azure Lab Services app to Canvas.
lab-services How To Configure Student Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-student-usage.md
Azure Lab Services supports up to 400 users per lab.
## Prerequisites -- To manage users for the lab, your Azure account needs one of the following permissions:-
- - [Lab Creator](/azure/role-based-access-control/built-in-roles#lab-creator), [Lab Contributor](/azure/role-based-access-control/built-in-roles#lab-contributor), or [Lab Operator](/azure/role-based-access-control/built-in-roles#lab-operator) role at the lab plan or resource group level. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
- - [Owner](/azure/role-based-access-control/built-in-roles#owner) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) at the lab plan or resource group level.
## Add users to a lab from an Azure AD group
To sync a lab with an existing Azure AD group:
Azure Lab Services automatically pulls the list of users from Azure AD, and refreshes the list every 24 hours. Optionally, you can select **Sync** in the **Users** tab to manually synchronize to the latest changes in the Azure AD group.-
-You can now start inviting users to your lab. Learn how to [send invitations to lab users](#send-invitations-to-users).
+
+Users are auto-registered to the lab and VMs are automatically assigned when the VM pool syncs with the Azure AD group. Educators don't need to send invitations and students don't need to register for the lab separately.
### Automatic management of virtual machines based on changes to the Azure AD group
lab-services How To Configure Teams For Lab Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-teams-for-lab-plans.md
For information about creating and managing labs in Microsoft Teams, see [Create
## Prerequisites -- An existing Azure Lab Services lab plan. If you don't have a lab plan yet, see [Set up a lab plan with Azure Lab Services](quick-create-resources.md).+ - The lab plan is created in the same tenant as Microsoft Teams. - To add the Azure Lab Services Teams app to a channel, your account needs to be an owner of the team in Microsoft Teams.-- To add a lab plan to Teams, your account should have the Owner, Lab Creator, or Contributor role on the lab plan.
+- To add a lab plan to Teams, your account should have the [Owner](./concept-lab-services-role-based-access-control.md#owner-role), [Lab Creator](./concept-lab-services-role-based-access-control.md#lab-creator-role), or [Contributor](./concept-lab-services-role-based-access-control.md#contributor-role) role on the lab plan. Learn more about [Azure Lab Services built-in roles](./concept-lab-services-role-based-access-control.md).
## User workflow
lab-services How To Create Lab Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-bicep.md
In this article, you learn how to create a lab using a Bicep file. For a detail
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Review the Bicep file
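After you review the file, a typical next step is to deploy it with the Azure CLI. A minimal sketch (the file name `main.bicep` and the resource group are placeholder assumptions):

```azurecli-interactive
# Deploy the Bicep file to an existing resource group (placeholder names)
az deployment group create --resource-group YOUR_RESOURCE_GROUP_NAME --template-file main.bicep
```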
lab-services How To Create Lab Plan Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-bicep.md
In this article, you learn how to create a lab plan using a Bicep file. For a d
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Review the Bicep file
lab-services How To Create Lab Plan Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-powershell.md
In this article, you learn how to use PowerShell and the Azure module to create
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).+ - [Windows PowerShell](/powershell/scripting/windows-powershell/starting-windows-powershell). - [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az). Must be version 7.2 or higher.
lab-services How To Create Lab Plan Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-python.md
In this article, you learn how to use Python and the Azure Python SDK to create
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).+ - [Setup Local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment). - [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/Azure-Samples/azure-samples-python-management/blob/main/samples/labservices/requirements.txt)
lab-services How To Create Lab Plan Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Review the template
lab-services How To Create Lab Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-powershell.md
In this article, you learn how to create a lab using PowerShell and the Azure mo
## Prerequisites -- Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.+ - [Windows PowerShell](/powershell/scripting/windows-powershell/starting-windows-powershell). - [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az). Must be version 7.2 or higher.
lab-services How To Create Lab Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-python.md
In this article, you learn how to create a lab using Python and the Azure Python
## Prerequisites -- Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.+ - [Setup Local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment). - [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/Azure-Samples/azure-samples-python-management/blob/main/samples/labservices/requirements.txt)-- Lab plan. To create a lab plan, see [Create a lab plan using Python and the Azure Python libraries (SDK)](how-to-create-lab-plan-python.md). ## Create a lab
lab-services How To Create Lab Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites
-To complete this quick start, make sure that you have:
--- Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.-- Lab plan. If you haven't created a lab plan, see [Create a lab plan using an ARM template](how-to-create-lab-plan-template.md). ## Review the template
lab-services How To Enable Nested Virtualization Template Vm Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-using-script.md
To enable nested virtualization on the template VM, you first connect to the VM
## Prerequisites -- A lab plan and one or more labs. Learn how to [Set up a lab plan](quick-create-resources.md) and [Set up a lab](tutorial-setup-lab.md).-- Permission to edit the lab. Learn how to [Add a user to the Lab Creator role](quick-create-resources.md#add-a-user-to-the-lab-creator-role). For more role options, see [Lab Services built-in roles](administrator-guide.md#rbac-roles). ## Enable nested virtualization by using a script
lab-services How To Manage Lab Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-lab-plans.md
Last updated 03/14/2023
In Azure Lab Services, a lab plan is a container for managed lab types such as labs. An administrator sets up a lab plan with Azure Lab Services and provides access to lab owners who can create labs in the plan. This article describes how to create a lab plan, view all lab plans, or delete a lab plan.
+## Prerequisites
++ ## Create a lab plan To create a lab plan, see [Quickstart: Set up resources to create labs](quick-create-resources.md).
lab-services How To Manage Labs Within Canvas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-labs-within-canvas.md
For more information about adding lab plans to Canvas, see [Configure Canvas to
## Prerequisites -- An Azure Lab Services lab plan. If you don't have a lab plan yet, see For information, see [Set up a lab plan with Azure Lab Services](quick-create-resources.md).+ - The Azure Lab Services Canvas app is enabled. Learn how to [configure Canvas for Azure Lab Services](./how-to-configure-canvas-for-lab-plans.md).-- To create and manage labs, your account should have the Lab Creator, or Lab Contributor role on the lab plan. ## Create a lab in Canvas
lab-services How To Manage Labs Within Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-labs-within-teams.md
For more information about adding lab plans to Microsoft Teams, see [Configure M
## Prerequisites -- An Azure Lab Services lab plan. If you don't have a lab plan yet, see For information, see [Set up a lab plan with Azure Lab Services](quick-create-resources.md).+ - The Azure Lab Services Teams app is added to your Teams channel. Learn how to [configure Teams for Azure Lab Services](./how-to-configure-teams-for-lab-plans.md).-- To create and manage labs, your account should have the Lab Creator, or Lab Contributor role on the lab plan. ## Create a lab in Teams
lab-services How To Manage Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-labs.md
This article describes how to create and delete labs. It also shows you how to v
## Prerequisites + - One or more labs. To create a lab, see [Tutorial: Create a lab](tutorial-setup-lab.md).-- Permission to edit the lab. To give educators permission to add and create labs, see [Add a user to the Lab Creator role](quick-create-resources.md#add-a-user-to-the-lab-creator-role). For more role options, see [Lab Services built-in roles](administrator-guide.md#rbac-roles). ## View all labs
lab-services How To Request Capacity Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-request-capacity-increase.md
Learn more about the general [process for creating Azure support requests](/azur
### Prerequisites
-To create a support request, your Azure account must have one of the following roles at the Azure subscription level:
- ## Prepare to submit a request
lab-services Lab Plan Setup Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-plan-setup-guide.md
Last updated 11/23/2021
# Lab plan setup guide - If you're an administrator, before you set up your Azure Lab Services environment, you first need to create a *lab plan* within your Azure subscription. A lab plan is associated one or more labs, and it takes only a few minutes to set up. This guide includes three sections:
This guide includes three sections:
- Plan your lab plan settings - Set up your lab plan + ## Prerequisites The following sections outline what you need to do before you can set up a lab plan.
To plan your lab plan settings, consider the following questions.
Your school's IT administrators ordinarily take on the Owner and Contributor roles for a lab plan. These roles are responsible for managing the policies that apply to all the labs in the lab plan. The person who creates the lab plan is automatically an Owner. You can add additional Owners and Contributors from the Azure Active Directory (Azure AD) tenant that's associated with your subscription.
-For more information about the lab plan Owner and Contributor roles, see [RBAC roles](./administrator-guide.md#rbac-roles).
+For more information about the lab plan Owner and Contributor roles, see [RBAC roles](./concept-lab-services-role-based-access-control.md).
[!INCLUDE [Select a tenant](./includes/multi-tenant-support.md)]
Lab users see only a single list of the VMs that they have access to across Azur
You may choose to have your IT team or faculty members create labs. To create labs, you then assign these people to the Lab Creator role within the lab plan. You ordinarily assign this role from the Azure AD tenant that's associated with your school subscription. Whoever creates a lab is automatically assigned as the Owner of the lab.
-For more information about the Lab Creator role, see [RBAC roles](./administrator-guide.md#rbac-roles).
+For more information about the Lab Creator role, see [RBAC roles](./concept-lab-services-role-based-access-control.md).
### Who will be allowed to own and manage labs? You can also choose to have IT and faculty members own\manage labs *without* giving them the ability to create labs. In this case, users from your subscription's Azure AD tenant are assigned either the Owner or Contributor for existing labs.
-For more information about the lab Owner and Contributor roles, see [RBAC roles](./administrator-guide.md#rbac-roles).
+For more information about the lab Owner and Contributor roles, see [RBAC roles](./concept-lab-services-role-based-access-control.md).
### Do you want to save images and share them across labs?
lab-services Migrate To 2022 Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/migrate-to-2022-update.md
This checklist highlights the sequence at a high-level:
To begin using the update, you'll need to create a lab plan.
-If you don't already have a lab plan, you can create a temporary lab plan for requesting capacity, and delete the plan afterwards. Because capacity is assigned to your subscription, it's not affected when you create or delete lab plans. The first time you create a lab plan, a special Microsoft-managed Azure subscription is automatically created. This subscription isn't visible to you and is used internally to assign your [dedicated capacity](/azure/lab-services/capacity-limits#per-customer-assigned-capacity).
+If you don't already have a lab plan, you can create a temporary lab plan for requesting capacity, and delete the plan afterwards. Because capacity is assigned to your subscription, it's not affected when you create or delete lab plans. The first time you create a lab plan, a special Microsoft-managed Azure subscription is automatically created. This subscription isn't visible to you and is used internally to assign your [dedicated capacity](./capacity-limits.md#per-customer-assigned-capacity).
-- [Create a lab plan](/azure/lab-services/tutorial-setup-lab-plan).
+- [Create a lab plan](./tutorial-setup-lab-plan.md).
- This lab plan can be deleted once capacity is requested. - You don't need to enable advanced networking or images; or assign permissions. - You can select any region.
And, since lab accounts and lab plans cannot share capacity, you'll need to requ
## 2. Request capacity
-As a customer, you're now assigned your own [dedicated VM cores quota](/azure/lab-services/capacity-limits#per-customer-assigned-capacity). This quota is assigned per-subscription. The initial number of VM cores assigned to your subscription is limited, so you'll need to request a core limit increase. Even if you're already using lab accounts in the current version of Azure Lab Services, you'll still need to request a core limit increase; existing cores in a lab account won't be available when you create a lab plan.
+As a customer, you're now assigned your own [dedicated VM cores quota](./capacity-limits.md#per-customer-assigned-capacity). This quota is assigned per-subscription. The initial number of VM cores assigned to your subscription is limited, so you'll need to request a core limit increase. Even if you're already using lab accounts in the current version of Azure Lab Services, you'll still need to request a core limit increase; existing cores in a lab account won't be available when you create a lab plan.
1. Verify the capacity available in your subscription by [determining the current usage and quota](./how-to-determine-your-quota-usage.md).
-1. [Request a core limit increase](/azure/lab-services/how-to-request-capacity-increase?tabs=Labplans).
+1. [Request a core limit increase](./how-to-request-capacity-increase.md?tabs=Labplans).
1. If you created a temporary lab plan, you can delete it at this point. Deleting lab plans has no impact on your subscription or the capacity you have available. Capacity is assigned to your subscription. ### Tips for requesting a capacity increase
For example, when you move from lab accounts to lab plans, you should first requ
## 3. Configure shared resources
-You can reuse the same Azure Compute Gallery and licensing servers that you use with your lab accounts. Optionally, you can also [configure more licensing servers](/azure/lab-services/how-to-create-a-lab-with-shared-resource) and galleries based on your needs. For VMs that require access to a licensing server, you'll create lab plans with [advanced networking](/azure/lab-services/how-to-connect-vnet-injection#connect-the-virtual-network-during-lab-plan-creation) enabled as shown in the next step.
+You can reuse the same Azure Compute Gallery and licensing servers that you use with your lab accounts. Optionally, you can also [configure more licensing servers](./how-to-create-a-lab-with-shared-resource.md) and galleries based on your needs. For VMs that require access to a licensing server, you'll create lab plans with [advanced networking](./how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) enabled as shown in the next step.
## 4. Create additional lab plans While you're waiting for capacity to be assigned, you can continue creating lab plans that will be used for setting up your labs.
-1. [Create and configure lab plans](/azure/lab-services/tutorial-setup-lab-plan).
- - If you plan to use a license server, don't forget to enable [advanced networking](/azure/lab-services/how-to-connect-vnet-injection#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plans.
- The lab plan's resource group name is significant because educators will select the resource group to [create a lab](/azure/lab-services/tutorial-setup-lab#create-a-lab).
+1. [Create and configure lab plans](./tutorial-setup-lab-plan.md).
+ - If you plan to use a license server, don't forget to enable [advanced networking](./how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plans.
+ - The lab plan's resource group name is significant because educators will select the resource group to [create a lab](./tutorial-setup-lab.md#create-a-lab).
- Likewise, the lab plan name is important. If more than one lab plan is in the resource group, educators will see a dropdown to choose a lab plan when they create a lab.
-1. [Assign permissions](/azure/lab-services/tutorial-setup-lab-plan#add-a-user-to-the-lab-creator-role) to educators that will create labs.
-1. Enable [Azure Marketplace images](/azure/lab-services/specify-marketplace-images).
-1. [Configure regions for labs](/azure/lab-services/create-and-configure-labs-admin). You should enable your lab plans to use the regions that you specified in your capacity request.
-1. Optionally, [attach an Azure Compute Gallery](/azure/lab-services/how-to-attach-detach-shared-image-gallery).
-1. Optionally, configure [integration with Canvas](/azure/lab-services/lab-services-within-canvas-overview) including [adding the app and linking lab plans](/azure/lab-services/how-to-get-started-create-lab-within-canvas). Alternately, configure [integration with Teams](/azure/lab-services/lab-services-within-teams-overview) by [adding the app to Teams groups](/azure/lab-services/how-to-get-started-create-lab-within-teams).
+1. [Assign permissions](./tutorial-setup-lab-plan.md#add-a-user-to-the-lab-creator-role) to educators that will create labs.
+1. Enable [Azure Marketplace images](./specify-marketplace-images.md).
+1. [Configure regions for labs](./create-and-configure-labs-admin.md). You should enable your lab plans to use the regions that you specified in your capacity request.
+1. Optionally, [attach an Azure Compute Gallery](./how-to-attach-detach-shared-image-gallery.md).
+1. Optionally, configure [integration with Canvas](./lab-services-within-canvas-overview.md) including [adding the app and linking lab plans](./how-to-get-started-create-lab-within-canvas.md). Alternately, configure [integration with Teams](./lab-services-within-teams-overview.md) by [adding the app to Teams groups](./how-to-get-started-create-lab-within-teams.md).
If you're moving from lab accounts, the following table provides guidance on how to map your lab accounts to lab plans: |Lab account configuration|Lab plan configuration| |||
-|[Virtual network peering](/azure/lab-services/how-to-connect-peer-virtual-network#configure-at-the-time-of-lab-account-creation)|Lab plans can reuse the same virtual network as lab accounts. </br> - [Setup advanced networking](/azure/lab-services/how-to-connect-vnet-injection#connect-the-virtual-network-during-lab-plan-creation) when you create the lab plan.|
-|[Role assignments](/azure/lab-services/administrator-guide-1#manage-identity) </br> - Lab account owner\contributor. </br> - Lab creator\owner\contributor.|Lab plans include new specialized roles. </br>1. [Review roles](/azure/lab-services/administrator-guide#rbac-roles). </br>2. [Assign permissions](/azure/lab-services/tutorial-setup-lab-plan#add-a-user-to-the-lab-creator-role).|
-|Enabled Marketplace images. </br> - Lab accounts only support Gen1 images from the Marketplace.|Lab plans include settings to enable [Azure Marketplace images](/azure/lab-services/specify-marketplace-images). </br> - Lab plans support Gen1 and Gen2 Marketplace images, so the list of images will be different than what you would see if using lab accounts.|
-|[Location](/azure/lab-services/how-to-manage-lab-accounts#create-a-lab-account) </br> - Labs are automatically created within the same geolocation as the lab account. </br> - You can't specify the exact region where a lab is created. |Lab plans enable specific control over which regions labs are created. </br> - [Configure regions for labs](/azure/lab-services/create-and-configure-labs-admin).|
-|[Attached Azure Compute Gallery (Shared Image Gallery)](/azure/lab-services/how-to-attach-detach-shared-image-gallery-1)|Lab plans can be attached to the same gallery used by lab accounts. </br>1. [Attach an Azure Compute Gallery](/azure/lab-services/how-to-attach-detach-shared-image-gallery). </br>2. Ensure that you [enable images for the lab plan](/azure/lab-services/how-to-attach-detach-shared-image-gallery#enable-and-disable-images).|
-|Teams integration|Configure lab plans with [Teams integration](/azure/lab-services/lab-services-within-teams-overview) by [adding the app to Teams groups](/azure/lab-services/how-to-get-started-create-lab-within-teams).|
-|[Firewall settings](/azure/lab-services/how-to-configure-firewall-settings-1) </br> - Create inbound and outbound rules for the lab's public IP address and the port range 49152 - 65535.|[Firewall settings](/azure/lab-services/how-to-configure-firewall-settings) </br> - Create inbound and outbound rules for the lab's public IP address and the port ranges 4980-4989, 5000-6999, and 7000-8999.|
+|[Virtual network peering](./how-to-connect-peer-virtual-network.md#configure-at-the-time-of-lab-account-creation)|Lab plans can reuse the same virtual network as lab accounts. </br> - [Setup advanced networking](./how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when you create the lab plan.|
+|[Role assignments](./concept-lab-services-role-based-access-control.md) </br> - Lab account owner\contributor. </br> - Lab creator\owner\contributor.|Lab plans include new specialized roles. </br>1. [Review roles](./concept-lab-services-role-based-access-control.md). </br>2. [Assign permissions](./tutorial-setup-lab-plan.md#add-a-user-to-the-lab-creator-role).|
+|Enabled Marketplace images. </br> - Lab accounts only support Gen1 images from the Marketplace.|Lab plans include settings to enable [Azure Marketplace images](./specify-marketplace-images.md). </br> - Lab plans support Gen1 and Gen2 Marketplace images, so the list of images will be different than what you would see if using lab accounts.|
+|[Location](./how-to-manage-lab-accounts.md#create-a-lab-account) </br> - Labs are automatically created within the same geolocation as the lab account. </br> - You can't specify the exact region where a lab is created. |Lab plans enable specific control over which regions labs are created. </br> - [Configure regions for labs](./create-and-configure-labs-admin.md).|
+|[Attached Azure Compute Gallery (Shared Image Gallery)](./how-to-attach-detach-shared-image-gallery-1.md)|Lab plans can be attached to the same gallery used by lab accounts. </br>1. [Attach an Azure Compute Gallery](./how-to-attach-detach-shared-image-gallery.md). </br>2. Ensure that you [enable images for the lab plan](./how-to-attach-detach-shared-image-gallery.md#enable-and-disable-images).|
+|Teams integration|Configure lab plans with [Teams integration](./lab-services-within-teams-overview.md) by [adding the app to Teams groups](./how-to-get-started-create-lab-within-teams.md).|
+|[Firewall settings](./how-to-configure-firewall-settings-1.md) </br> - Create inbound and outbound rules for the lab's public IP address and the port range 49152 - 65535.|[Firewall settings](./how-to-configure-firewall-settings.md) </br> - Create inbound and outbound rules for the lab's public IP address and the port ranges 4980-4989, 5000-6999, and 7000-8999.|
## 5. Validate images
-Each of the VM sizes has been remapped to use a newer [Azure VM Compute SKU](/azure/lab-services/administrator-guide#vm-sizing). If you're using an [attached compute gallery](/azure/lab-services/how-to-attach-detach-shared-image-gallery), validate each of your customized images with the new VM Compute SKU by publishing a lab with the image and testing common student workloads. Before creating labs, verify that each image in the compute gallery is replicated to the same regions enabled in your lab plans.
+Each of the VM sizes has been remapped to use a newer [Azure VM Compute SKU](./administrator-guide.md#vm-sizing). If you're using an [attached compute gallery](./how-to-attach-detach-shared-image-gallery.md), validate each of your customized images with the new VM Compute SKU by publishing a lab with the image and testing common student workloads. Before creating labs, verify that each image in the compute gallery is replicated to the same regions enabled in your lab plans.
## 6. Create and publish labs
-Once you have capacity assigned to your subscription, you can [create and publish](/azure/lab-services/tutorial-setup-lab) representative labs to validate the educator and student experience.
+Once you have capacity assigned to your subscription, you can [create and publish](./tutorial-setup-lab.md) representative labs to validate the educator and student experience.
Creating a selection of representative labs as a proof of concept is an optional but highly recommended step, which enables you to validate performance based on common student workloads. After a successful proof of concept is completed, you can submit capacity requests based on your immediate upcoming need, building incrementally to your full capacity requirement over time. ### Lab strategies
You cannot migrate existing labs to the August 2022 Update. Instead, you must cr
## 7. Update cost management reports
-Update reports to include the new cost entry type, `Microsoft.LabServices/labs`, for labs created using the August 2022 Update. [Built-in and custom tags](/azure/lab-services/cost-management-guide#understand-the-entries) allow for [grouping](/azure/cost-management-billing/costs/quick-acm-cost-analysis) in cost analysis. For more information about tracking costs, see [Cost management for Azure Lab Services](/azure/lab-services/cost-management-guide).
+Update reports to include the new cost entry type, `Microsoft.LabServices/labs`, for labs created using the August 2022 Update. [Built-in and custom tags](./cost-management-guide.md#understand-the-entries) allow for [grouping](/azure/cost-management-billing/costs/quick-acm-cost-analysis) in cost analysis. For more information about tracking costs, see [Cost management for Azure Lab Services](./cost-management-guide.md).
## Next steps
lab-services Quick Create Connect Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-connect-lab.md
After you complete this quickstart, you'll have a lab that you can connect to an
## Prerequisites

[!INCLUDE [Azure subscription](./includes/lab-services-prerequisite-subscription.md)]

## Create a lab
lab-services Quick Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-resources.md
After you complete this quickstart, you'll have a lab plan that you can use for
## Prerequisites

[!INCLUDE [Azure subscription](./includes/lab-services-prerequisite-subscription.md)]

## Create a lab plan
lab-services Tutorial Create Lab With Advanced Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-create-lab-with-advanced-networking.md
In this tutorial, you learn how to:
## Prerequisites
-An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
## Create a resource group
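A minimal sketch using the Azure CLI (the group name and region are placeholders):

```azurecli-interactive
az group create --name MyLabResourceGroup --location eastus
```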
lab-services Tutorial Setup Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab.md
In this tutorial, you learn how to:
## Prerequisites -- You have access to an existing lab plan. If you don't have access to a lab plan, ask an administrator to [create a lab plan and grant you access](./quick-create-resources.md).--- To create labs, your Azure account must have either of the following Azure AD roles at the lab plan or resource group level. Learn more about the [Azure Lab Services roles](./administrator-guide.md#rbac-roles).
- - Lab Creator
- - Lab Operator
- - Owner
- - Contributor
## Create a lab
After you add users to the lab, they can register for the lab by using a registr
You've successfully created a customized lab for a classroom training, created a recurring lab schedule, and invited users to register for the lab. Next, lab users can now connect to their lab virtual machine by using remote desktop.
-In this tutorial, you have the Lab Creator Azure AD role to let you create labs for a lab plan. Depending on your organization, the responsibilities for creating lab plans and labs might be assigned to different people or teams. Learn more about [mapping permissions across your organization](./classroom-labs-scenarios.md#mapping-organizational-roles-to-permissions).
+In this tutorial, you have the Lab Creator Azure RBAC role to let you create labs for a lab plan. Depending on your organization, the responsibilities for creating lab plans and labs might be assigned to different people or teams. Learn more about [mapping permissions across your organization](./classroom-labs-scenarios.md#mapping-organizational-roles-to-permissions).
> [!div class="nextstepaction"] > [Connect to a lab virtual machine](./tutorial-connect-lab-virtual-machine.md)
logic-apps Biztalk Server To Azure Integration Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-to-azure-integration-services-overview.md
To help address BizTalk customers' needs in migrating their workloads and interf
| Timeframe | Functionality investments | |--|| | Short term | - [XSLT + .NET Framework support (Public Preview)](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/net-framework-assembly-support-added-to-azure-logic-apps/ba-p/3669120) <br>- [SWIFT MT encoder and decoder (Public Preview)](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/announcement-public-preview-of-swift-message-processing-using/ba-p/3670014) <br>- Call custom .NET Framework code from Azure Logic Apps (Standard) |
-| Medium term | - EDI and integration account enhancements <br>- Native XML support <br>- WCF and SOAP support <br>- BizTalk Rules Engine support |
+| Medium term | - EDI and integration account enhancements <br>- Native XML support <br>- WCF and SOAP support <br>- Business Rules Engine support |
| Long term | Business event tracking | To stay updated about the latest investments, subscribe to the [Integrations on Azure Blog - Tech Community](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/bg-p/IntegrationsonAzureBlog).
You've learned more about how Azure Integration Services compares to BizTalk Ser
> [!div class="nextstepaction"] > [Choose the best Azure Integration Services offerings for your scenario](azure-integration-services-choose-capabilities.md) >
-> [Migration approaches for BizTalk Server to Azure Integration Services](biztalk-server-azure-integration-services-migration-approaches.md)
+> [Migration approaches for BizTalk Server to Azure Integration Services](biztalk-server-azure-integration-services-migration-approaches.md)
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
So now you'll add a trigger that starts your workflow.
## Add a trigger
-This example workflow starts with the [built-in Request trigger](../connectors/connectors-native-reqres.md) named **When an HTTP request is received**. This trigger creates an endpoint that other services or logic app workflows can call and waits for those inbound calls or requests to arrive. Built-in operations run natively and directly within the Azure Logic Apps runtime.
+This example workflow starts with the [built-in Request trigger](../connectors/connectors-native-reqres.md) named **When a HTTP request is received**. This trigger creates an endpoint that other services or logic app workflows can call and waits for those inbound calls or requests to arrive. Built-in operations run natively and directly within the Azure Logic Apps runtime.
1. On the workflow designer, make sure that your blank workflow is open and that the **Add a trigger** prompt is selected on the designer surface.
-1. By using **request** as the search term, [follow these steps to add the built-in Request trigger named **When an HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger) to your workflow.
+1. By using **request** as the search term, [follow these steps to add the built-in Request trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger) to your workflow.
When the trigger appears on the designer, the trigger's information pane opens to show the trigger's properties, settings, and other actions.
This example workflow continues with the [Office 365 Outlook managed connector a
1. On the designer, under the trigger that you added, select the plus sign (**+**) > **Add an action**.
- The **Browse operations** pane opens so that you can select the next action.
+ The **Add an action** pane opens so that you can select the next action.
1. By using **office send an email** as the search term, [follow these steps to add the Office 365 Outlook action that's named **Send an email (V2)**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action) to your workflow.
- ![Screenshot showing the designer, the picker pane, and the selected Office 365 Outlook named Send an email.](./media/create-single-tenant-workflows-azure-portal/find-send-email-action.png)
- 1. In the action's information pane, on the **Create Connection** tab, select **Sign in** so that you can create a connection to your email account. ![Screenshot showing the designer, the pane named Send an email (V2) with Sign in button.](./media/create-single-tenant-workflows-azure-portal/send-email-action-sign-in.png)
To find the fully qualified domain names (FQDNs) for connections, follow these s
In this example, the workflow runs when the Request trigger receives an inbound request, which is sent to the URL for the endpoint that's created by the trigger. When you saved the workflow for the first time, Azure Logic Apps automatically generated this URL. So, before you can send this request to trigger the workflow, you need to find this URL.
-1. On the workflow designer, select the Request trigger that's named **When an HTTP request is received**.
+1. On the workflow designer, select the Request trigger that's named **When a HTTP request is received**.
1. After the information pane opens, on the **Parameters** tab, find the **HTTP POST URL** property. To copy the generated URL, select the **Copy Url** (copy file icon), and save the URL somewhere else for now. The URL follows this format:
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
Title: Create Standard workflows in single-tenant Azure Logic Apps with Visual Studio Code
-description: Create Standard logic app workflows that run in single-tenant Azure Logic Apps to automate integration tasks across apps, data, services, and systems using Visual Studio Code.
+ Title: Create Standard workflows with Visual Studio Code
+description: Create Standard logic app workflows that run in single-tenant Azure Logic Apps using Visual Studio Code.
ms.suite: integration Previously updated : 04/04/2023 Last updated : 05/23/2023 # Customer intent: As a logic apps developer, I want to create a Standard logic app workflow that runs in single-tenant Azure Logic Apps using Visual Studio Code.
-# Create a Standard logic app workflow for single-tenant Azure Logic Apps using Visual Studio Code
+# Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
For local development in Visual Studio Code, you need to set up a local data sto
1. Before you run your logic app workflow, make sure to start the emulator.
-For more information, review the [Azurite documentation](https://github.com/Azure/Azurite#azurite-v3).
+ 1. In Visual Studio Code, from the **View** menu, select **Command Palette**.
+
+ 1. After the command palette appears, enter **Azurite: Start**.
+
+For more information, review the [documentation for the Azurite extension in Visual Studio Code](https://github.com/Azure/Azurite#visual-studio-code-extension).
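The local runtime finds the emulator through the storage connection string in your project's **local.settings.json** file. As a minimal sketch, assuming the development-storage shortcut that new projects typically use, that file can look like the following:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node"
  }
}
```

The `UseDevelopmentStorage=true` value routes all storage traffic to the local Azurite endpoints rather than to an Azure storage account.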
### Tools
Install the following tools and versions for your specific operating system: Win
1. In Visual Studio Code, on the left toolbar, select **Extensions**.
- 1. In the extensions search box, enter `azure logic apps standard`. From the results list, select **Azure Logic Apps (Standard)** **>** **Install**.
+ 1. In the extensions search box, enter **azure logic apps standard**. From the results list, select **Azure Logic Apps (Standard)** **>** **Install**.
After the installation completes, the extension appears in the **Extensions: Installed** list.
- ![Screenshot showing Visual Studio Code with the Azure Logic Apps (Standard) extension installed](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-extension-installed.png)
+ ![Screenshot shows Visual Studio Code with Azure Logic Apps (Standard) extension installed.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-extension-installed.png)
   > [!TIP]
   > If the extension doesn't appear in the installed list, try restarting Visual Studio Code.
- Currently, you can have both Consumption (multi-tenant) and Standard (single-tenant) extensions installed at the same time. The development experiences differ from each other in some ways, but your Azure subscription can include both Standard and Consumption logic app types. Visual Studio Code shows all the deployed logic apps in your Azure subscription, but organizes your apps under each extension, **Azure Logic Apps (Consumption)** and **Azure Logic Apps (Standard)**.
+ Currently, you can have both Consumption (multi-tenant) and Standard (single-tenant) extensions installed at the same time. The development experiences differ from each other in some ways, but your Azure subscription can include both Standard and Consumption logic app types. In Visual Studio Code, the Azure window shows all the Azure-deployed and hosted logic apps in your Azure subscription, but organizes your apps in the following ways:
+
+ * **Logic Apps (Consumption)** section: All the Consumption logic apps in your subscription
+ * **Resources** section: All the Standard logic apps in your subscription. Previously, these logic apps appeared in the **Logic Apps (Standard)** section, which has now moved into the **Resources** section.
* To use the [Inline Code Operations action](../logic-apps/logic-apps-add-run-inline-code.md) that runs JavaScript, install [Node.js version 16.x.x unless a newer version is already installed](https://nodejs.org/en/download/releases/).
Install the following tools and versions for your specific operating system: Win
For example, you can find the **Azure Logic Apps Standard: Project Runtime** setting here or use the search box to find other settings:
- ![Screenshot that shows Visual Studio Code settings for "Azure Logic Apps (Standard)" extension.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-settings.png)
+ ![Screenshot shows Visual Studio Code settings for Azure Logic Apps (Standard) extension.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-settings.png)
<a name="connect-azure-account"></a>
Install the following tools and versions for your specific operating system: Win
1. On the Visual Studio Code Activity Bar, select the Azure icon.
- ![Screenshot that shows Visual Studio Code Activity Bar and selected Azure icon.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-azure-icon.png)
-
-1. In the Azure pane, under **Logic Apps (Standard)**, select **Sign in to Azure**. When the Visual Studio Code authentication page appears, sign in with your Azure account.
+ ![Screenshot shows Visual Studio Code Activity Bar and selected Azure icon.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-azure-icon.png)
- ![Screenshot that shows Azure pane and selected link for Azure sign in.](./media/create-single-tenant-workflows-visual-studio-code/sign-in-azure-subscription.png)
+1. In the Azure window, under **Resources**, select **Sign in to Azure**. When the Visual Studio Code authentication page appears, sign in with your Azure account.
- After you sign in, the Azure pane shows the subscriptions in your Azure account. If you also have the publicly released extension, you can find any logic apps that you created with that extension in the **Logic Apps (Consumption)** section, not the **Logic Apps (Standard)** section.
+ ![Screenshot shows Azure window and selected link for Azure sign in.](./media/create-single-tenant-workflows-visual-studio-code/sign-in-azure-subscription.png)
- If the expected subscriptions don't appear, or you want the pane to show only specific subscriptions, follow these steps:
+ After you sign in, the Azure window shows the Azure subscriptions associated with your Azure account. If the expected subscriptions don't appear, or you want the pane to show only specific subscriptions, follow these steps:
1. In the subscriptions list, move your pointer next to the first subscription until the **Select Subscriptions** button (filter icon) appears. Select the filter icon.
- ![Screenshot that shows Azure pane and selected filter icon.](./media/create-single-tenant-workflows-visual-studio-code/filter-subscription-list.png)
+ ![Screenshot shows Azure window with subscriptions and selected filter icon.](./media/create-single-tenant-workflows-visual-studio-code/filter-subscription-list.png)
Or, in the Visual Studio Code status bar, select your Azure account.
Before you can create your logic app, create a local project so that you can man
1. In Visual Studio Code, close all open folders.
-1. In the Azure pane, next to **Logic Apps (Standard)**, select **Create New Project** (icon that shows a folder and lightning bolt).
+1. In the Azure window, on the **Workspace** section toolbar, select **Create New Project** (folder icon).
- ![Screenshot that shows Azure pane toolbar with "Create New Project" selected.](./media/create-single-tenant-workflows-visual-studio-code/create-new-project-folder.png)
+ ![Screenshot shows Azure window and Workspace toolbar with Create New Project selected.](./media/create-single-tenant-workflows-visual-studio-code/create-new-project-folder.png)
1. If Windows Defender Firewall prompts you to grant network access for `Code.exe`, which is Visual Studio Code, and for `func.exe`, which is the Azure Functions Core Tools, select **Private networks, such as my home or work network** **>** **Allow access**.

1. Browse to the location where you created your project folder, select that folder, and continue.
- ![Screenshot that shows "Select Folder" dialog box with a newly created project folder and the "Select" button selected.](./media/create-single-tenant-workflows-visual-studio-code/select-project-folder.png)
+ ![Screenshot shows Select Folder box and new project folder with Select button selected.](./media/create-single-tenant-workflows-visual-studio-code/select-project-folder.png)
1. From the templates list that appears, select either **Stateful Workflow** or **Stateless Workflow**. This example selects **Stateful Workflow**.
- ![Screenshot that shows the workflow templates list with "Stateful Workflow" selected.](./media/create-single-tenant-workflows-visual-studio-code/select-stateful-stateless-workflow.png)
+ ![Screenshot shows workflow templates list with Stateful Workflow selected.](./media/create-single-tenant-workflows-visual-studio-code/select-stateful-stateless-workflow.png)
-1. Provide a name for your workflow and press Enter. This example uses **Fabrikam-Stateful-Workflow** as the name.
+1. Provide a name for your workflow and press Enter. This example uses **Stateful-Workflow** as the name.
- ![Screenshot that shows the "Create new Stateful Workflow (3/4)" box and "Fabrikam-Stateful-Workflow" as the workflow name.](./media/create-single-tenant-workflows-visual-studio-code/name-your-workflow.png)
+ ![Screenshot shows Create new Stateful Workflow (3/4) box and workflow name, Stateful-Workflow.](./media/create-single-tenant-workflows-visual-studio-code/name-your-workflow.png)
   > [!NOTE]
   > You might get an error named **azureLogicAppsStandard.createNewProject** with the error message,
   > **Unable to write to Workspace Settings because azureFunctions.suppressProject is not a registered configuration**.
   > If you do, try installing the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions), either directly from the Visual Studio Marketplace or from inside Visual Studio Code.
- Visual Studio Code finishes creating your project, and opens the **workflow.json** file for your workflow in the code editor.
+1. When Visual Studio Code prompts you to choose where to open your project, select **Open in current window** to continue in the current Visual Studio Code window, or select **Open in new window** to open a separate instance.
- > [!NOTE]
- > If you're prompted to select how to open your project, select **Open in current window**
- > if you want to open your project in the current Visual Studio Code window. To open a new
- > instance for Visual Studio Code, select **Open in new window**.
+ Visual Studio Code finishes creating your project.
-1. From the Visual Studio toolbar, open the Explorer pane, if not already open.
+1. From the Visual Studio Activity Bar, open the Explorer pane, if not already open.
The Explorer pane shows your project, which now includes automatically generated project files. For example, the project has a folder that shows your workflow's name. Inside this folder, the **workflow.json** file contains your workflow's underlying JSON definition.
- ![Screenshot that shows the Explorer pane with project folder, workflow folder, and "workflow.json" file.](./media/create-single-tenant-workflows-visual-studio-code/local-project-created.png)
+ ![Screenshot shows Explorer pane with project folder, workflow folder, and workflow.json file.](./media/create-single-tenant-workflows-visual-studio-code/local-project-created.png)
[!INCLUDE [Visual Studio Code - logic app project structure](../../includes/logic-apps-single-tenant-project-structure-visual-studio-code.md)]
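For orientation, a newly created stateful workflow starts as a mostly empty definition. A sketch of what **workflow.json** typically contains at this point (the schema version shown is an assumption and can differ in your project):

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "actions": {},
    "contentVersion": "1.0.0.0",
    "outputs": {},
    "triggers": {}
  },
  "kind": "Stateful"
}
```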
The authoring capability is currently available only in Visual Studio Code, but
   > [!IMPORTANT]
   > This action is a one-way operation that you can't undo.
-1. In the Explorer pane, at your project's root, move your mouse pointer over any blank area below all the other files and folders, open the shortcut menu, and select **Convert to Nuget-based Logic App project**.
+1. In the Explorer pane, at your project's root, move your mouse pointer over any blank area below all the other files and folders, open the shortcut menu, and select **Convert to NuGet-based Logic App project**.
- ![Screenshot that shows that shows Explorer pane with the project's shortcut menu opened from a blank area in the project window.](./media/create-single-tenant-workflows-visual-studio-code/convert-logic-app-project.png)
+ ![Screenshot shows Explorer pane with project shortcut menu opened from blank area in project window.](./media/create-single-tenant-workflows-visual-studio-code/convert-logic-app-project.png)
1. When the prompt appears, confirm the project conversion.
The authoring capability is currently available only in Visual Studio Code, but
## Open the workflow definition file in the designer
-1. Expand the project folder for your workflow. Open the **workflow.json** file's shortcut menu, and select **Open in Designer**.
+1. Expand your workflow's project folder, which is named **Stateful-Workflow** in this example, and find the **workflow.json** file.
+
+1. Open the **workflow.json** file's shortcut menu, and select **Open Designer**.
- ![Screenshot that shows Explorer pane and shortcut window for the workflow.json file with "Open in Designer" selected.](./media/create-single-tenant-workflows-visual-studio-code/open-definition-file-in-designer.png)
+ ![Screenshot shows Explorer pane, workflow.json file shortcut menu, and Open Designer selected.](./media/create-single-tenant-workflows-visual-studio-code/open-definition-file-in-designer.png)
-1. From the **Enable connectors in Azure** list, select **Use connectors from Azure**, which applies to all managed connectors that are available and deployed in Azure, not just connectors for Azure services.
+1. After the **Enable connectors in Azure** list opens, select **Use connectors from Azure**. This option applies to all the managed or "shared" connectors, which are hosted and run in Azure, as opposed to the built-in, native, or "in-app" connectors, which run directly with the Azure Logic Apps runtime.
- ![Screenshot that shows Explorer pane with "Enable connectors in Azure" list open and "Use connectors from Azure" selected.](./media/create-single-tenant-workflows-visual-studio-code/use-connectors-from-azure.png)
+ ![Screenshot shows Explorer pane, open list named Enable connectors in Azure, and selected option to Use connectors from Azure.](./media/create-single-tenant-workflows-visual-studio-code/use-connectors-from-azure.png)
> [!NOTE]
- > Stateless workflows currently support only *actions* for [managed connectors](../connectors/managed.md),
- > which are deployed in Azure, and not triggers. Although you have the option to enable connectors in Azure for your stateless workflow,
+ > Stateless workflows currently support only *actions* from [managed connectors](../connectors/managed.md), not triggers.
+ > Although you have the option to enable connectors in Azure for your stateless workflow,
> the designer doesn't show any managed connector triggers for you to select.
-1. From the **Select subscription** list, select the Azure subscription to use for your logic app project.
+1. After the **Select subscription** list opens, select the Azure subscription to use for your logic app project.
- ![Screenshot that shows Explorer pane with the "Select subscription" box and your subscription selected.](./media/create-single-tenant-workflows-visual-studio-code/select-azure-subscription.png)
+ ![Screenshot shows Explorer pane with list named Select subscription and a selected subscription.](./media/create-single-tenant-workflows-visual-studio-code/select-azure-subscription.png)
-1. From the resource groups list, select **Create new resource group**.
+1. After the resource groups list opens, select **Create new resource group**.
- ![Screenshot that shows Explorer pane with resource groups list and "Create new resource group" selected.](./media/create-single-tenant-workflows-visual-studio-code/create-select-resource-group.png)
+ ![Screenshot shows Explorer pane with resource groups list and selected option to create new resource group.](./media/create-single-tenant-workflows-visual-studio-code/create-select-resource-group.png)
1. Provide a name for the resource group, and press Enter. This example uses **Fabrikam-Workflows-RG**.
- ![Screenshot that shows Explorer pane and resource group name box.](./media/create-single-tenant-workflows-visual-studio-code/enter-name-for-resource-group.png)
+ ![Screenshot shows Explorer pane and resource group name box.](./media/create-single-tenant-workflows-visual-studio-code/enter-name-for-resource-group.png)
1. From the locations list, select the Azure region to use when creating your resource group and resources. This example uses **West Central US**.
The authoring capability is currently available only in Visual Studio Code, but
   >
   > If the designer won't open, review the troubleshooting section, [Designer fails to open](#designer-fails-to-open).
- After the designer appears, the **Choose an operation** prompt appears on the designer and is selected by default, which shows the **Add an action** pane.
+ After the designer appears, the **Add a trigger** prompt appears on the designer.
- ![Screenshot that shows the workflow designer.](./media/create-single-tenant-workflows-visual-studio-code/workflow-designer.png)
+1. On the designer, select **Add a trigger**, which opens the **Add a trigger** pane and a gallery showing all the connectors that have triggers for you to select.
+
+ ![Screenshot shows workflow designer, the selected prompt named Add a trigger, and the gallery for connectors with triggers.](./media/create-single-tenant-workflows-visual-studio-code/workflow-designer-triggers-overview.png)
1. Next, [add a trigger and actions](#add-trigger-actions) to your workflow.
The authoring capability is currently available only in Visual Studio Code, but
## Add a trigger and actions
-After you open the designer, the **Choose an operation** prompt appears on the designer and is selected by default. You can now start creating your workflow by adding a trigger and actions.
+After you open a blank workflow in the designer, the **Add a trigger** prompt appears on the designer. You can now start creating your workflow by adding a trigger and actions.
-The workflow in this example uses this trigger and these actions:
+> [!IMPORTANT]
+> To locally run a workflow that uses a webhook-based trigger or actions, such as the
+> [built-in HTTP Webhook trigger or action](../connectors/connectors-native-webhook.md),
+> you must enable this capability by [setting up forwarding for the webhook's callback URL](#webhook-setup).
+
+The workflow in this example uses the following trigger and actions:
-* The built-in [Request trigger](../connectors/connectors-native-reqres.md), **When an HTTP request is received**, which receives inbound calls or requests and creates an endpoint that other services or logic apps can call.
+* The [Request built-in connector trigger named **When an HTTP request is received**](../connectors/connectors-native-reqres.md), which can receive inbound calls or requests and creates an endpoint that other services or logic app workflows can call.
-* The [Office 365 Outlook action](../connectors/connectors-create-api-office365-outlook.md), **Send an email**.
+* The [Office 365 Outlook managed connector action named **Send an email**](../connectors/connectors-create-api-office365-outlook.md). To follow this how-to guide, you need an Office 365 Outlook email account. If you have an email account that's supported by a different connector, you can use that connector, but that connector's user experience will differ from the steps in this example.
-* The built-in [Response action](../connectors/connectors-native-reqres.md), which you use to send a reply and return data back to the caller.
+* The [Request built-in connector action named **Response**](../connectors/connectors-native-reqres.md), which you use to send a reply and return data back to the caller.
### Add the Request trigger
-1. Next to the designer, in the **Add a trigger** pane, under the **Choose an operation** search box, make sure that **Built-in** is selected so that you can select a trigger that runs natively.
+1. On the workflow designer, in the **Add a trigger** pane, open the **Runtime** list, and select **In-App** so that you view only the available built-in connector triggers.
-1. In the **Choose an operation** search box, enter **when a http request**, and select the built-in Request trigger that's named **When an HTTP request is received**.
+1. Find the Request trigger named **When an HTTP request is received** by using the search box, and add that trigger to your workflow. For more information, see [Build a workflow with a trigger and actions](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
- ![Screenshot that shows the workflow designer and **Add a trigger** pane with "When an HTTP request is received" trigger selected.](./media/create-single-tenant-workflows-visual-studio-code/add-request-trigger.png)
+ ![Screenshot shows workflow designer, Add a trigger pane, and selected trigger named When an HTTP request is received.](./media/create-single-tenant-workflows-visual-studio-code/add-request-trigger.png)
- When the trigger appears on the designer, the trigger's details pane opens to show the trigger's properties, settings, and other actions.
+ When the trigger appears on the designer, the trigger's information pane opens and shows the trigger's parameters, settings, and other related tasks.
- ![Screenshot that shows the workflow designer with the "When an HTTP request is received" trigger selected and trigger details pane open.](./media/create-single-tenant-workflows-visual-studio-code/request-trigger-added-to-designer.png)
+ ![Screenshot shows information pane for the trigger named When an HTTP request is received.](./media/create-single-tenant-workflows-visual-studio-code/request-trigger-added-to-designer.png)
> [!TIP]
- > If the details pane doesn't appear, makes sure that the trigger is selected on the designer.
+   > If the information pane doesn't appear, make sure that the trigger is selected on the designer.
-1. If you need to delete an item from the designer, [follow these steps for deleting items from the designer](#delete-from-designer).
+1. Save your workflow. On the designer toolbar, select **Save**.
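   After you save, the designer writes the trigger into your **workflow.json** file. As a rough sketch, assuming the trigger name used in this example and no extra parameters, the **triggers** section can look like this fragment:

   ```json
   "triggers": {
     "When_an_HTTP_request_is_received": {
       "type": "Request",
       "kind": "Http",
       "inputs": {}
     }
   }
   ```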
-### Add the Office 365 Outlook action
+If you need to delete an item from the designer, [follow these steps for deleting items from the designer](#delete-from-designer).
-1. On the designer, under the trigger that you added, select the plus sign (**+**) > **Add an action**.
+### Add the Office 365 Outlook action
- The **Choose an operation** prompt appears on the designer, and the **Add an action** pane reopens so that you can select the next action.
+1. On the designer, under the Request trigger, select the plus sign (**+**) > **Add an action**.
-1. On the **Add an action** pane, under the **Choose an operation** search box, select **Azure** so that you can select an action for a managed connector that's deployed in Azure.
+1. In the **Add an action** pane that opens, from the **Runtime** list, select **Shared** so that you view only the available managed connector actions.
- This example selects and uses the Office 365 Outlook action, **Send an email (V2)**.
+1. Find the Office 365 Outlook managed connector action named **Send an email (V2)** by using the search box, and add that action to your workflow. For more information, see [Build a workflow with a trigger and actions](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- ![Screenshot that shows the workflow designer and **Add an action** pane with Office 365 Outlook "Send an email" action selected.](./media/create-single-tenant-workflows-visual-studio-code/add-send-email-action.png)
+ ![Screenshot shows workflow designer and Add an action pane with selected Office 365 Outlook action named Send an email.](./media/create-single-tenant-workflows-visual-studio-code/add-send-email-action.png)
-1. In the action's details pane, select **Sign in** so that you can create a connection to your email account.
+1. When the action's authentication pane opens, select **Sign in** to create a connection to your email account.
- ![Screenshot that shows the workflow designer and **Send an email (V2)** pane with "Sign in" selected.](./media/create-single-tenant-workflows-visual-studio-code/send-email-action-sign-in.png)
+ ![Screenshot shows action named Send an email (V2) with selected sign in button.](./media/create-single-tenant-workflows-visual-studio-code/send-email-action-sign-in.png)
-1. When Visual Studio Code prompts you for consent to access your email account, select **Open**.
+1. Follow the subsequent prompts to select your account, allow access, and allow returning to Visual Studio Code.
- ![Screenshot that shows the Visual Studio Code prompt to permit access.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-open-external-website.png)
+ > [!NOTE]
+ > If too much time passes before you complete the prompts, the authentication process times out
+ > and fails. In this case, return to the designer and retry signing in to create the connection.
- > [!TIP]
- > To prevent future prompts, select **Configure Trusted Domains**
- > so that you can add the authentication page as a trusted domain.
+ 1. When the Microsoft prompt appears, select the user account for Office 365 Outlook, and then select **Allow access**.
-1. Follow the subsequent prompts to sign in, allow access, and allow returning to Visual Studio Code.
+   1. When Azure Logic Apps prompts you to open a Visual Studio Code link, select **Open**.
- > [!NOTE]
- > If too much time passes before you complete the prompts, the authentication process times out and fails.
- > In this case, return to the designer and retry signing in to create the connection.
+ ![Screenshot shows prompt to open link for Visual Studio Code.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-open-link-type.png)
-1. When the Azure Logic Apps (Standard) extension prompts you for consent to access your email account, select **Open**. Follow the subsequent prompt to allow access.
+   1. When Visual Studio Code prompts you to open the Microsoft Azure Tools, select **Open**.
- ![Screenshot that shows the extension prompt to permit access.](./media/create-single-tenant-workflows-visual-studio-code/allow-extension-open-uri.png)
+ ![Screenshot shows prompt to open Microsoft Azure tools.](./media/create-single-tenant-workflows-visual-studio-code/visual-studio-code-open-external-website.png)
> [!TIP]
- > To prevent future prompts, select **Don't ask again for this extension**.
+ > To skip such future prompts, select the following options when the associated prompts appear:
+ >
+ > - Permission to open link for Visual Studio Code: Select **Always allow logic-apis-westcentralus.consent.azure-apim.net to open links of this type in the associated app**. This domain changes based on the Azure region that you selected for your logic app resource.
+ >
+ > - Permission to open Microsoft Azure Tools: Select **Don't ask again for this extension**.
- After Visual Studio Code creates your connection, some connectors show the message that **The connection will be valid for {n} days only**. This time limit applies only to the duration while you author your logic app in Visual Studio Code. After deployment, this limit no longer applies because your logic app can authenticate at runtime by using its automatically enabled [system-assigned managed identity](../logic-apps/create-managed-service-identity.md). This managed identity differs from the authentication credentials or connection string that you use when you create a connection. If you disable this system-assigned managed identity, connections won't work at runtime.
+ After Visual Studio Code creates your connection, some connectors show the message that **The connection will be valid for {n} days only**. This time limit applies only to the duration while you author your logic app workflow in Visual Studio Code. After deployment, this limit no longer applies because your workflow can authenticate at runtime by using its automatically enabled [system-assigned managed identity](create-managed-service-identity.md). This managed identity differs from the authentication credentials or connection string that you use when you create a connection. If you disable this system-assigned managed identity, connections won't work at runtime.
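   The connection details themselves live in your project's **connections.json** file rather than in **workflow.json**. The following is only a sketch of the general shape that a managed API connection entry takes during local development; every placeholder value, and the connection key app setting name, are assumptions that your own project generates differently:

   ```json
   {
     "managedApiConnections": {
       "office365": {
         "api": {
           "id": "/subscriptions/<subscription-ID>/providers/Microsoft.Web/locations/<region>/managedApis/office365"
         },
         "connection": {
           "id": "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Web/connections/<connection-name>"
         },
         "connectionRuntimeUrl": "<connection-runtime-URL>",
         "authentication": {
           "type": "Raw",
           "scheme": "Key",
           "parameter": "@appsetting('office365-connectionKey')"
         }
       }
     }
   }
   ```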
1. On the designer, if the **Send an email** action doesn't appear selected, select that action.
-1. On the action's details pane, on the **Parameters** tab, provide the required information for the action, for example:
+1. On the action information pane, on the **Parameters** tab, provide the required information for the action, for example:
- ![Screenshot that shows the workflow designer with details for Office 365 Outlook "Send an email" action.](./media/create-single-tenant-workflows-visual-studio-code/send-email-action-details.png)
+ ![Screenshot shows information for the Office 365 Outlook action named Send an email.](./media/create-single-tenant-workflows-visual-studio-code/send-email-action-details.png)
   | Property | Required | Value | Description |
   |-|-|-|-|
- | **To** | Yes | <*your-email-address*> | The email recipient, which can be your email address for test purposes. This example uses the fictitious email, **sophiaowen@fabrikam.com**. |
+ | **To** | Yes | <*your-email-address*> | The email recipient, which can be your email address for test purposes. This example uses the fictitious email, **sophia.owen@fabrikam.com**. |
   | **Subject** | Yes | **An email from your example workflow** | The email subject |
   | **Body** | Yes | **Hello from your example workflow!** | The email body content |
- ||||
> [!NOTE]
- > If you want to make any changes in the details pane on the **Settings**, **Static Result**, or **Run After** tab,
- > make sure that you select **Done** to commit those changes before you switch tabs or change focus to the designer.
+ > If you make any changes on the **Testing** tab, make sure that you select **Save**
+ > to commit those changes before you switch tabs or change focus to the designer.
> Otherwise, Visual Studio Code won't keep your changes.
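   For reference, a sketch of how this action might appear in the underlying **workflow.json** definition after you provide these values; the connection reference name and operation path shown here are assumptions, and your generated values can differ:

   ```json
   "Send_an_email_(V2)": {
     "type": "ApiConnection",
     "inputs": {
       "host": {
         "connection": {
           "referenceName": "office365"
         }
       },
       "method": "post",
       "path": "/v2/Mail",
       "body": {
         "To": "sophia.owen@fabrikam.com",
         "Subject": "An email from your example workflow",
         "Body": "<p>Hello from your example workflow!</p>"
       }
     },
     "runAfter": {}
   }
   ```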
-1. On the designer, select **Save**.
-
-> [!IMPORTANT]
-> To locally run a workflow that uses a webhook-based trigger or actions, such as the
-> [built-in HTTP Webhook trigger or action](../connectors/connectors-native-webhook.md),
-> you must enable this capability by [setting up forwarding for the webhook's callback URL](#webhook-setup).
+1. Save your workflow. On the designer, select **Save**.
<a name="webhook-setup"></a>
To test your logic app, follow these steps to start a debugging session, and fin
1. To debug a stateless workflow more easily, you can [enable the run history for that workflow](#enable-run-history-stateless).
+1. Make sure that your Azurite emulator is running. For more information, review [Storage requirements](#storage-requirements).
1. From the Visual Studio Code **Run** menu, select **Start Debugging** (F5). The **Terminal** window opens so that you can review the debugging session.
To test your logic app, follow these steps to start a debugging session, and fin
1. From the **workflow.json** file's shortcut menu, select **Overview**.
- ![Screenshot that shows the Explorer pane and shortcut window for the workflow.json file with "Overview" selected.](./media/create-single-tenant-workflows-visual-studio-code/open-workflow-overview.png)
+ ![Screenshot shows Explorer pane, workflow.json file's shortcut menu with selected option, Overview.](./media/create-single-tenant-workflows-visual-studio-code/open-workflow-overview.png)
1. Find the **Callback URL** value, which looks similar to this URL for the example Request trigger: `http://localhost:7071/api/<workflow-name>/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>`
- ![Screenshot that shows your workflow's overview page with callback URL](./media/create-single-tenant-workflows-visual-studio-code/find-callback-url.png)
+ ![Screenshot shows workflow overview page with callback URL.](./media/create-single-tenant-workflows-visual-studio-code/find-callback-url.png)
1. To test the callback URL by triggering the logic app workflow, open [Postman](https://www.postman.com/downloads/) or your preferred tool for creating and sending requests.
To test your logic app, follow these steps to start a debugging session, and fin
In Postman, the request pane opens so that you can send a request to the callback URL for the Request trigger.
- ![Screenshot that shows Postman with the opened request pane](./media/create-single-tenant-workflows-visual-studio-code/postman-request-pane.png)
+ ![Screenshot shows Postman with the opened request pane.](./media/create-single-tenant-workflows-visual-studio-code/postman-request-pane.png)
1. Return to Visual Studio Code. From the workflow's overview page, copy the **Callback URL** property value.

1. Return to Postman. On the request pane, next to the method list, which currently shows **GET** as the default request method, paste the callback URL that you previously copied into the address box, and select **Send**.
- ![Screenshot that shows Postman and callback URL in the address box with Send button selected](./media/create-single-tenant-workflows-visual-studio-code/postman-test-call-back-url.png)
+ ![Screenshot shows Postman and callback URL in the address box with Send button selected.](./media/create-single-tenant-workflows-visual-studio-code/postman-test-call-back-url.png)
The example logic app workflow sends an email that appears similar to this example:
- ![Screenshot that shows Outlook email as described in the example](./media/create-single-tenant-workflows-visual-studio-code/workflow-app-result-email.png)
+ ![Screenshot shows Outlook email as described in the example.](./media/create-single-tenant-workflows-visual-studio-code/workflow-app-result-email.png)
1. In Visual Studio Code, return to your workflow's overview page.
To test your logic app, follow these steps to start a debugging session, and fin
   | **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
   | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <p><p>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. |
   | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
- |||
-1. To review the statuses for each step in a specific run and the step's inputs and outputs, select the ellipses (**...**) button for that run, and select **Show Run**.
+1. To review the statuses for each step in a specific run and the step's inputs and outputs, select the ellipses (**...**) button for that run, and select **Show run**.
- ![Screenshot that shows your workflow's run history row with ellipses button and "Show Run" selected](./media/create-single-tenant-workflows-visual-studio-code/show-run-history.png)
+ ![Screenshot shows workflow's run history row with selected ellipses button and Show Run.](./media/create-single-tenant-workflows-visual-studio-code/show-run-history.png)
Visual Studio Code opens the monitoring view and shows the status for each step in the run.
- ![Screenshot that shows each step in the workflow run and their status](./media/create-single-tenant-workflows-visual-studio-code/run-history-action-status.png)
+ ![Screenshot shows each step in workflow run and their status.](./media/create-single-tenant-workflows-visual-studio-code/run-history-action-status.png)
> [!NOTE] > If a run failed and a step in monitoring view shows the **400 Bad Request** error, this problem might result
To test your logic app, follow these steps to start a debugging session, and fin
   | **Succeeded with retries** | The action succeeded but only after one or more retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. |
   | **Timed out** | The action stopped due to the timeout limit specified by that action's settings. |
   | **Waiting** | Applies to a webhook action that's waiting for an inbound request from a caller. |
- |||
[aborted-icon]: ./media/create-single-tenant-workflows-visual-studio-code/aborted.png [cancelled-icon]: ./media/create-single-tenant-workflows-visual-studio-code/cancelled.png
To test your logic app, follow these steps to start a debugging session, and fin
[timed-out-icon]: ./media/create-single-tenant-workflows-visual-studio-code/timed-out.png [waiting-icon]: ./media/create-single-tenant-workflows-visual-studio-code/waiting.png
-1. To review the inputs and outputs for each step, select the step that you want to inspect.
-
- ![Screenshot that shows the status for each step in the workflow plus the inputs and outputs in the expanded "Send an email" action](./media/create-single-tenant-workflows-visual-studio-code/run-history-details.png)
+1. To review the inputs and outputs for each step, select the step that you want to inspect. To further review the raw inputs and outputs for that step, select **Show raw inputs** or **Show raw outputs**.
-1. To further review the raw inputs and outputs for that step, select **Show raw inputs** or **Show raw outputs**.
+ ![Screenshot shows status for each step in workflow plus inputs and outputs in expanded action named Send an email.](./media/create-single-tenant-workflows-visual-studio-code/run-history-details.png)
1. To stop the debugging session, on the **Run** menu, select **Stop Debugging** (Shift + F5).
To test your logic app, follow these steps to start a debugging session, and fin
## Return a response
-To return a response to the caller that sent a request to your logic app, you can use the built-in [Response action](../connectors/connectors-native-reqres.md) for a workflow that starts with the Request trigger.
+When you have a workflow that starts with the Request trigger, you can return a response to the caller that sent a request to your workflow by using the [Request built-in action named **Response**](../connectors/connectors-native-reqres.md).
-1. On the workflow designer, under the **Send an email** action, select the plus sign (**+**) > **Add an action**.
+1. In the workflow designer, under the **Send an email** action, select the plus sign (**+**) > **Add an action**.
- The **Choose an operation** prompt appears on the designer, and the **Add an action** pane reopens so that you can select the next action.
+ The **Add an action** pane opens so that you can select the next action.
-1. On the **Add an action** pane, under the **Choose an action** search box, make sure that **Built-in** is selected. In the search box, enter **response**, and select the **Response** action.
+1. In the **Add an action** pane, from the **Runtime** list, select **In-App**. Find and add the **Response** action.
- ![Screenshot that shows the workflow designer with the Response action selected.](./media/create-single-tenant-workflows-visual-studio-code/add-response-action.png)
+ After the **Response** action appears on the designer, the action's details pane automatically opens.
- When the **Response** action appears on the designer, the action's details pane automatically opens.
-
- ![Screenshot that shows the workflow designer with the "Response" action's details pane open and the "Body" property set to the "Send an email" action's "Body" property value.](./media/create-single-tenant-workflows-visual-studio-code/response-action-details.png)
+ ![Screenshot shows workflow designer and Response information pane.](./media/create-single-tenant-workflows-visual-studio-code/response-action-details.png)
1. On the **Parameters** tab, provide the required information for the response that you want to return.
- This example returns the **Body** property value that's output from the **Send an email** action.
-
- 1. Click inside the **Body** property box so that the dynamic content list appears and shows the available output values from the preceding trigger and actions in the workflow.
+ This example returns the **Body** parameter value, which is the output from the **Send an email** action.
- ![Screenshot that shows the "Response" action's details pane with the mouse pointer inside the "Body" property so that the dynamic content list appears.](./media/create-single-tenant-workflows-visual-studio-code/open-dynamic-content-list.png)
+ 1. For the **Body** parameter, select inside the edit box, and select the lightning icon, which opens the dynamic content list. This list shows the available output values from the preceding trigger and actions in the workflow.
1. In the dynamic content list, under **Send an email**, select **Body**.
- ![Screenshot that shows the open dynamic content list. In the list, under the "Send an email" header, the "Body" output value is selected.](./media/create-single-tenant-workflows-visual-studio-code/select-send-email-action-body-output-value.png)
+ ![Screenshot shows open dynamic content list where under Send an email header, the Body output value is selected.](./media/create-single-tenant-workflows-visual-studio-code/select-send-email-action-body-output-value.png)
When you're done, the Response action's **Body** property is now set to the **Send an email** action's **Body** output value.
- ![Screenshot that shows the status for each step in the workflow plus the inputs and outputs in the expanded "Response" action.](./media/create-single-tenant-workflows-visual-studio-code/response-action-details-body-property.png)
+ ![Screenshot shows workflow designer, Response information pane, and Body parameter set to Body value for the action named Send an email.](./media/create-single-tenant-workflows-visual-studio-code/response-action-details-body-property.png)
1. On the designer, select **Save**.
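   In the underlying **workflow.json** definition, this configuration roughly translates to the following sketch, where the `runAfter` object makes the response wait for the email action to succeed (the action names match this example and are otherwise assumptions):

   ```json
   "Response": {
     "type": "Response",
     "kind": "Http",
     "inputs": {
       "statusCode": 200,
       "body": "@body('Send_an_email_(V2)')"
     },
     "runAfter": {
       "Send_an_email_(V2)": [
         "Succeeded"
       ]
     }
   }
   ```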
After you make updates to your logic app, you can run another test by rerunning
1. In Postman or your tool for creating and sending requests, send another request to trigger your workflow.
-1. If you created a stateful workflow, on the workflow's overview page, check the status for the most recent run. To view the status, inputs, and outputs for each step in that run, select the ellipses (**...**) button for that run, and select **Show Run**.
+1. If you created a stateful workflow, on the workflow's overview page, check the status for the most recent run. To view the status, inputs, and outputs for each step in that run, select the ellipses (**...**) button for that run, and select **Show run**.
For example, here's the step-by-step status for a run after the sample workflow was updated with the Response action.
- ![Screenshot that shows the status for each step in the updated workflow plus the inputs and outputs in the expanded "Response" action.](./media/create-single-tenant-workflows-visual-studio-code/run-history-details-rerun.png)
+ ![Screenshot shows status for each step in updated workflow plus inputs and outputs in expanded Response action.](./media/create-single-tenant-workflows-visual-studio-code/run-history-details-rerun.png)
1. To stop the debugging session, on the **Run** menu, select **Stop Debugging** (Shift + F5).
To find the fully qualified domain names (FQDNs) for these connections, follow t
## Deploy to Azure
-From Visual Studio Code, you can directly publish your project to Azure to deploy your Standard logic app. You can publish your logic app as a new resource, which automatically creates any necessary resources, such as an [Azure Storage account, similar to function app requirements](../azure-functions/storage-considerations.md). Or, you can publish your logic app to a previously deployed Standard logic app resource, which overwrites that logic app.
+From Visual Studio Code, you can directly publish your project to Azure to deploy your Standard logic app resource. You can publish your logic app as a new resource, which automatically creates any necessary resources, such as an [Azure Storage account, similar to function app requirements](../azure-functions/storage-considerations.md). Or, you can publish your logic app to a previously deployed Standard logic app resource, which overwrites that logic app.
Deployment for the Standard logic app resource requires a hosting plan and pricing tier, which you select during deployment. For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing).
Deployment for the Standard logic app resource requires a hosting plan and prici
### Publish to a new Standard logic app resource
-1. On the Visual Studio Code Activity Bar, select the Azure icon.
+1. On the Visual Studio Code Activity Bar, select the Azure icon to open the Azure window.
-1. On the **Logic Apps (Standard)** pane toolbar, select **Deploy to Logic App**.
+1. In the **Workspace** section, on the toolbar, select **Deploy** > **Deploy to Logic App**.
- ![Screenshot that shows the pane "Logic Apps (Standard)" pane and pane's toolbar with "Deploy to Logic App" selected.](./media/create-single-tenant-workflows-visual-studio-code/deploy-to-logic-app.png)
+ ![Screenshot shows Azure window with Workspace toolbar and Deploy shortcut menu with Deploy to Logic App selected.](./media/create-single-tenant-workflows-visual-studio-code/deploy-to-logic-app.png)
1. If prompted, select the Azure subscription to use for your logic app deployment.
Deployment for the Standard logic app resource requires a hosting plan and prici
This example continues with **Create new Logic App (Standard) in Azure Advanced**.
- ![Screenshot that shows the "Logic Apps (Standard)" pane and a list with "Create new Logic App (Standard) in Azure" selected.](./media/create-single-tenant-workflows-visual-studio-code/select-create-logic-app-options.png)
+ ![Screenshot shows deployment options list and selected option, Create new Logic App (Standard) in Azure Advanced.](./media/create-single-tenant-workflows-visual-studio-code/select-create-logic-app-options.png)
1. To create your new Standard logic app resource, follow these steps:

   1. Provide a globally unique name for your new logic app, which is the name to use for the **Logic App (Standard)** resource. This example uses **Fabrikam-Workflows-App**.
- ![Screenshot that shows the "Logic Apps (Standard)" pane and a prompt to provide a name for the new logic app to create.](./media/create-single-tenant-workflows-visual-studio-code/enter-logic-app-name.png)
+ ![Screenshot shows prompt to provide a name for the new logic app to create.](./media/create-single-tenant-workflows-visual-studio-code/enter-logic-app-name.png)
1. Select a hosting plan for your new logic app. Either create a name for your plan, or select an existing plan (Windows-based App Service plans only). This example selects **Create new App Service Plan**.
Deployment for the Standard logic app resource requires a hosting plan and prici
1. To review and monitor the deployment process, on the **View** menu, select **Output**. From the Output window toolbar list, select **Azure Logic Apps**.
- ![Screenshot that shows the Output window with the "Azure Logic Apps" selected in the toolbar list along with the deployment progress and statuses.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-deployment-output-window.png)
+ ![Screenshot shows Output window with Azure Logic Apps selected in the toolbar list along with the deployment progress and statuses.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-deployment-output-window.png)
When Visual Studio Code finishes deploying your logic app to Azure, the following message appears:
- ![Screenshot that shows a message that deployment to Azure successfully completed.](./media/create-single-tenant-workflows-visual-studio-code/deployment-to-azure-completed.png)
+ ![Screenshot shows a message that deployment to Azure successfully completed.](./media/create-single-tenant-workflows-visual-studio-code/deployment-to-azure-completed.png)
Congratulations, your logic app is now live in Azure and enabled by default.
You can have multiple workflows in your logic app project. To add a blank workfl
1. On the Visual Studio Code Activity Bar, select the Azure icon.
-1. In the Azure pane, next to **Logic Apps (Standard)**, select **Create Workflow** (Azure Logic Apps icon).
+1. In the Azure window, in the **Workspace** section, on the toolbar, select **Create Workflow** (Azure Logic Apps icon).
1. Select the workflow type that you want to add: **Stateful** or **Stateless**
When you're done, a new workflow folder appears in your project along with a **w
In Visual Studio Code, you can view all the deployed logic apps in your Azure subscription, whether they're Consumption or Standard logic app resources, and select tasks that help you manage those logic apps. However, to access both resource types, you need both the **Azure Logic Apps (Consumption)** and the **Azure Logic Apps (Standard)** extensions for Visual Studio Code.
-1. On the left toolbar, select the Azure icon. In the **Logic Apps (Standard)** pane, expand your subscription, which shows all the deployed logic apps for that subscription.
+1. On the Visual Studio Code Activity Bar, select the Azure icon. In the **Resources** section, expand your subscription, and then expand **Logic App**, which shows all the logic apps deployed in Azure for that subscription.
1. Open the logic app that you want to manage. From the logic app's shortcut menu, select the task that you want to perform.
In Visual Studio Code, you can view all the deployed logic apps in your Azure su
   > For more information, review [Considerations for stopping logic apps](#considerations-stop-logic-apps) and
   > [Considerations for deleting logic apps](#considerations-delete-logic-apps).
- ![Screenshot that shows Visual Studio Code with the opened "Azure Logic Apps (Standard)" extension pane and the deployed workflow.](./media/create-single-tenant-workflows-visual-studio-code/find-deployed-workflow-visual-studio-code.png)
+ ![Screenshot shows Visual Studio Code with Resources section and deployed logic app resource.](./media/create-single-tenant-workflows-visual-studio-code/find-deployed-workflow-visual-studio-code.png)
1. To view all the workflows in the logic app, expand your logic app, and then expand the **Workflows** node.
In Visual Studio Code, you can view all the deployed logic apps in your Azure su
The Azure portal opens in your browser, signs you in to the portal automatically if you're signed in to Visual Studio Code, and shows your logic app.
- ![Screenshot that shows the Azure portal page for your logic app in Visual Studio Code.](./media/create-single-tenant-workflows-visual-studio-code/deployed-workflow-azure-portal.png)
+ ![Screenshot shows Azure portal page for your logic app in Visual Studio Code.](./media/create-single-tenant-workflows-visual-studio-code/deployed-workflow-azure-portal.png)
You can also sign in separately to the Azure portal, use the portal search box to find your logic app, and then select your logic app from the results list.
- ![Screenshot that shows the Azure portal and the search bar with search results for deployed logic app, which appears selected.](./media/create-single-tenant-workflows-visual-studio-code/find-deployed-workflow-azure-portal.png)
+ ![Screenshot shows Azure portal and search bar with search results for deployed logic app, which appears selected.](./media/create-single-tenant-workflows-visual-studio-code/find-deployed-workflow-azure-portal.png)
<a name="considerations-stop-logic-apps"></a>
Stopping a logic app affects workflow instances in the following ways:
To stop a trigger from firing on unprocessed items since the last run, clear the trigger state before you restart the logic app:
- 1. In Visual Studio Code, on the left toolbar, select the Azure icon.
- 1. In the **Logic Apps (Standard)** pane, expand your subscription, which shows all the deployed logic apps for that subscription.
+ 1. On the Visual Studio Code Activity Bar, select the Azure icon to open the Azure window.
+ 1. In the **Resources** section, expand your subscription, which shows all the deployed logic apps for that subscription.
   1. Expand your logic app, and then expand the node that's named **Workflows**.

   1. Open a workflow, and edit any part of that workflow's trigger.

   1. Save your changes. This step resets the trigger's current state.
After you deploy a logic app to the Azure portal from Visual Studio Code, you ca
1. In the Azure portal search box, enter **logic apps**. When the results list appears, under **Services**, select **Logic apps**.
- ![Screenshot that shows the Azure portal search box with the "logic apps" search text.](./media/create-single-tenant-workflows-visual-studio-code/portal-find-logic-app-resource.png)
+ ![Screenshot shows Azure portal search box with logic apps as search text.](./media/create-single-tenant-workflows-visual-studio-code/portal-find-logic-app-resource.png)
1. On the **Logic apps** pane, select the logic app that you deployed from Visual Studio Code.
- ![Screenshot that shows the Azure portal and the Logic App (Standard) resources deployed in Azure.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-resources-pane.png)
+ ![Screenshot shows Azure portal and Standard logic app resources deployed in Azure.](./media/create-single-tenant-workflows-visual-studio-code/logic-app-resources-pane.png)
The Azure portal opens the individual resource page for the selected logic app.
- ![Screenshot that shows your logic app workflow's resource page in the Azure portal.](./media/create-single-tenant-workflows-visual-studio-code/deployed-workflow-azure-portal.png)
+ ![Screenshot shows Azure portal and your logic app resource page.](./media/create-single-tenant-workflows-visual-studio-code/deployed-workflow-azure-portal.png)
1. To view the workflows in this logic app, on the logic app's menu, select **Workflows**. The **Workflows** pane shows all the workflows in the current logic app. This example shows the workflow that you created in Visual Studio Code.
- ![Screenshot that shows a "Logic App (Standard)" resource page with the "Workflows" pane open and the deployed workflow](./media/create-single-tenant-workflows-visual-studio-code/deployed-logic-app-workflows-pane.png)
+ ![Screenshot shows your logic app resource page with opened Workflows pane and workflows.](./media/create-single-tenant-workflows-visual-studio-code/deployed-logic-app-workflows-pane.png)
1. To view a workflow, on the **Workflows** pane, select that workflow.
After you deploy a logic app to the Azure portal from Visual Studio Code, you ca
For example, to view the steps in the workflow, select **Designer**.
- ![Screenshot that shows the selected workflow's "Overview" pane, while the workflow menu shows the selected "Designer" command.](./media/create-single-tenant-workflows-visual-studio-code/workflow-overview-pane-select-designer.png)
+ ![Screenshot shows selected workflow's Overview pane, while the workflow menu shows the selected "Designer" command.](./media/create-single-tenant-workflows-visual-studio-code/workflow-overview-pane-select-designer.png)
The workflow designer opens and shows the workflow that you built in Visual Studio Code. You can now make changes to this workflow in the Azure portal.
- ![Screenshot that shows the workflow designer and workflow deployed from Visual Studio Code.](./media/create-single-tenant-workflows-visual-studio-code/opened-workflow-designer.png)
+ ![Screenshot shows workflow designer and workflow deployed from Visual Studio Code.](./media/create-single-tenant-workflows-visual-studio-code/opened-workflow-designer.png)
<a name="add-workflow-portal"></a> ## Add another workflow in the portal
-Through the Azure portal, you can add blank workflows to a **Logic App (Standard)** resource that you deployed from Visual Studio Code and build those workflows in the Azure portal.
+Through the Azure portal, you can add blank workflows to a Standard logic app resource that you deployed from Visual Studio Code and build those workflows in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com), select your deployed **Logic App (Standard)** resource.
+1. In the [Azure portal](https://portal.azure.com), select your deployed Standard logic app resource.
-1. On the logic app menu, select **Workflows**. On the **Workflows** pane, select **Add**.
+1. On the logic app resource menu, select **Workflows**. On the **Workflows** pane, select **Add**.
- ![Screenshot that shows the selected logic app's "Workflows" pane and toolbar with "Add" command selected.](./media/create-single-tenant-workflows-visual-studio-code/add-new-workflow.png)
+ ![Screenshot shows selected logic app's Workflows pane and toolbar with Add command selected.](./media/create-single-tenant-workflows-visual-studio-code/add-new-workflow.png)
1. In the **New workflow** pane, provide a name for the workflow. Select either **Stateful** or **Stateless** **>** **Create**. After Azure deploys your new workflow, which appears on the **Workflows** pane, select that workflow so that you can manage and perform other tasks, such as opening the designer or code view.
- ![Screenshot that shows the selected workflow with management and review options.](./media/create-single-tenant-workflows-visual-studio-code/view-new-workflow.png)
+ ![Screenshot shows selected workflow with management and review options.](./media/create-single-tenant-workflows-visual-studio-code/view-new-workflow.png)
For example, opening the designer for a new workflow shows a blank canvas. You can now build this workflow in the Azure portal.
- ![Screenshot that shows the workflow designer and a blank workflow.](./media/create-single-tenant-workflows-visual-studio-code/opened-blank-workflow-designer.png)
+ ![Screenshot shows workflow designer and blank workflow.](./media/create-single-tenant-workflows-visual-studio-code/opened-blank-workflow-designer.png)
<a name="enable-run-history-stateless"></a>
To debug a stateless workflow more easily, you can enable the run history for th
After you deploy a **Logic App (Standard)** resource from Visual Studio Code to Azure, you can review any available run history and details for a workflow in that resource by using the Azure portal and the **Monitor** experience for that workflow. However, you first have to enable the **Monitor** view capability on that logic app resource.
-1. In the [Azure portal](https://portal.azure.com), select the deployed **Logic App (Standard)** resource.
+1. In the [Azure portal](https://portal.azure.com), open the Standard logic app resource.
-1. On that resource's menu, under **API**, select **CORS**.
+1. On the logic app resource menu, under **API**, select **CORS**.
1. On the **CORS** pane, under **Allowed Origins**, add the wildcard character (*).

1. When you're done, on the **CORS** toolbar, select **Save**.
- ![Screenshot that shows the Azure portal with a deployed Logic App (Standard) resource. On the resource menu, "CORS" is selected with a new entry for "Allowed Origins" set to the wildcard "*" character.](./media/create-single-tenant-workflows-visual-studio-code/enable-run-history-deployed-logic-app.png)
+ ![Screenshot shows Azure portal with deployed Standard logic app resource. On the resource menu, CORS is selected with a new entry for Allowed Origins set to the wildcard * character.](./media/create-single-tenant-workflows-visual-studio-code/enable-run-history-deployed-logic-app.png)
<a name="enable-open-application-insights"></a>
To delete an item in your workflow from the designer, follow any of these steps:
* Select the item so that the details pane opens for that item. In the pane's upper right corner, open the ellipses (**...**) menu, and select **Delete**. To confirm, select **OK**.
- ![Screenshot that shows a selected item on designer with the opened details pane plus the selected ellipses button and "Delete" command.](./media/create-single-tenant-workflows-visual-studio-code/delete-item-from-designer.png)
+ ![Screenshot shows a selected item on designer with opened information pane plus selected ellipses button and "Delete" command.](./media/create-single-tenant-workflows-visual-studio-code/delete-item-from-designer.png)
> [!TIP]
> If the ellipses menu isn't visible, expand the Visual Studio Code window wide enough so that
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 01/05/2023 Last updated : 05/23/2023
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Jobs.BackgroundJobs.NumPartitionsInJobDefinitionsTable` | `4` job partitions | Sets the number of job partitions in the job definition table. This value controls how much execution throughput is affected by partition storage limits. |
| `Jobs.BackgroundJobs.NumPartitionsInJobTriggersQueue` | `1` job queue | Sets the number of job queues monitored by job dispatchers for jobs to process. This value also affects the number of storage partitions where job queues exist. |
| `Jobs.BackgroundJobs.NumWorkersPerProcessorCount` | `192` dispatcher worker instances | Sets the number of *dispatcher worker instances* or *job dispatchers* to have per processor core. This value affects the number of workflow runs per core. |
-| `Jobs.StuckJobThreshold` | `00:60:00` <br>(60 minutes) | Sets the time duration before a job is declared as stuck. If you have an action that requires more than 60 minutes to run, you might need to increase this setting's default value and also the [`functionTimeout` property](../azure-functions/functions-scale.md#timeout) value in the same **host.json** file to the same value. |
<a name="recurrence-triggers"></a>
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-overview.md
ms.suite: integration
Previously updated : 02/07/2023 Last updated : 05/24/2023 # What is Azure Logic Apps?
When you create an ISE, Azure *injects* or deploys that ISE into your Azure virt
## How logic apps work
-In a logic app, each workflow always starts with a single [trigger](#logic-app-concepts). A trigger fires when a condition is met, for example, when a specific event happens or when data meets specific criteria. Many triggers include [scheduling capabilities](concepts-schedule-automated-recurring-tasks-workflows.md) that control how often your workflow runs. After the trigger fires, one or more [actions](#logic-app-concepts) run operations that process, handle, or convert data that travels through the workflow, or that advance the workflow to the next step.
+In a logic app, each workflow always starts with a single [trigger](#logic-app-concepts). A trigger fires when a condition is met, for example, when a specific event happens or when data meets specific criteria. Many triggers include [scheduling capabilities](concepts-schedule-automated-recurring-tasks-workflows.md) that control how often your workflow runs. After the trigger fires, one or more [actions](#logic-app-concepts) run operations that process, handle, or convert data that travels through the workflow, or that advance the workflow to the next step. Azure Logic Apps uses "at-least-once" message delivery semantics: the service rarely delivers a message more than once, but no messages are lost. If your business logic doesn't or can't handle duplicate messages, you need to implement idempotence so that repeating the exact same operation doesn't change the result after the first execution, as in the sketch that follows.
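Where duplicate deliveries would cause harmful side effects, a common idempotent-consumer pattern is to record each message's unique ID and skip work that's already done. The following minimal Python sketch illustrates the pattern only; the ID store, message IDs, and handler are hypothetical and not part of Azure Logic Apps:

```python
# Illustrative idempotent-consumer sketch; not a Logic Apps API.
processed_ids = set()  # in production, use durable storage shared across workers

def apply_side_effects(payload):
    """Hypothetical business operation, for example charging an order."""
    print(f"processing order {payload['orderId']}")

def handle_message(message_id, payload):
    """Process a message at most once, even under at-least-once delivery."""
    if message_id in processed_ids:
        return  # duplicate delivery: repeating the operation changes nothing
    apply_side_effects(payload)
    processed_ids.add(message_id)

# At-least-once delivery may hand the consumer the same message twice:
handle_message("msg-001", {"orderId": 42})
handle_message("msg-001", {"orderId": 42})  # no-op on the duplicate
```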
The following screenshot shows part of an example enterprise workflow. This workflow uses conditions and switches to determine the next action. Let's say you have an order system, and your workflow processes incoming orders. You want to review orders above a certain cost manually. Your workflow already has previous steps that determine how much an incoming order costs. So, you create an initial condition based on that cost value. For example:
logic-apps Monitor Workflows Collect Diagnostic Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-workflows-collect-diagnostic-data.md
Previously updated : 02/16/2023 Last updated : 05/25/2023 # As a developer, I want to collect and send diagnostics data for my logic app workflows to specific destinations, such as a Log Analytics workspace, storage account, or event hub, for further review.
This how-to guide shows how to complete the following tasks, based on whether yo
1. [Include custom properties in telemetry](#custom-tracking-properties).
-### [Standard (preview)](#tab/standard)
+### [Standard](#tab/standard)
1. [Add a diagnostic setting to enable data collection](#add-diagnostic-setting).
If you turned on Log Analytics when you created your logic app resource, skip th
:::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/workspace-summary-pane-logic-apps-management.png" alt-text="Screenshot showing the Azure portal, the workspace summary pane with Logic Apps Management solution.":::
-### [Standard (preview)](#tab/standard)
+### [Standard](#tab/standard)
For a Standard logic app, you can continue with [Add a diagnostic setting](#add-diagnostic-setting). No other prerequisite steps are necessary to enable Log Analytics, nor does the Logic Apps Management solution apply to Standard logic apps.
For a Standard logic app, you can continue with [Add a diagnostic setting](#add-
1. To finish adding your diagnostic setting, select **Save**.
-### [Standard (preview)](#tab/standard)
+### [Standard](#tab/standard)
1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
After your workflow runs, you can view the data about those runs in your Log Ana
:::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/logic-app-action-details.png" alt-text="Screenshot showing all operations and details for a specific logic app workflow run.":::
-### [Standard (preview)](#tab/standard)
+### [Standard](#tab/standard)
1. In the [Azure portal](https://portal.azure.com), open your Log Analytics workspace.
If you don't specify this custom tracking ID, Azure automatically generates this
:::image type="content" source="media/monitor-workflows-collect-diagnostic-data/consumption/custom-tracking-id.png" alt-text="Screenshot showing Azure portal, designer for Consumption workflow, and Request trigger with custom tracking ID.":::
-### [Standard (preview)](#tab/standard)
+### [Standard](#tab/standard)
:::image type="content" source="media/monitor-workflows-collect-diagnostic-data/standard/custom-tracking-id.png" alt-text="Screenshot showing Azure portal, designer for Standard workflow, and Request trigger with custom tracking ID.":::
Actions have a **Tracked Properties** section where you can specify a custom pro
:::image type="content" source="media/monitor-workflows-collect-diagnostic-data/consumption/tracked-properties.png" alt-text="Screenshot showing Azure portal, designer for Consumption workflow, and HTTP action with tracked properties.":::
-### [Standard (preview)](#tab/standard)
+### [Standard](#tab/standard)
:::image type="content" source="media/monitor-workflows-collect-diagnostic-data/standard/tracked-properties.png" alt-text="Screenshot showing Azure portal, designer for Standard workflow, and HTTP action with tracked properties.":::
Actions have a **Tracked Properties** section where you can specify a custom pro
Tracked properties can track only a single action's inputs and outputs, but you can use the `correlation` properties of events to correlate across actions in a workflow run.
+Tracked properties can only reference the parameters, inputs, and outputs for their own trigger or action.
+
+Tracked properties aren't allowed on a trigger or action that has secure inputs, secure outputs, or both. They're also not allowed to reference another trigger or action that has secure inputs, secure outputs, or both.
+
The following examples show where custom properties appear in your Log Analytics workspace:

### [Consumption](#tab/consumption)
The following examples shows where custom properties appear in your Log Analytic
:::image type="content" source="./media/monitor-workflows-collect-diagnostic-data/consumption/example-tracked-properties.png" alt-text="Screenshot showing example tracked properties for a specific Consumption workflow.":::
-### [Standard (preview)](#tab/standard)
+### [Standard](#tab/standard)
The custom tracking ID appears in the **ClientTrackingId** column and tracked properties appear in the **TrackedProperties** column, for example:
machine-learning How To Authenticate Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-online-endpoint.md
To get the key or token, use [az ml online-endpoint get-credentials](/cli/azure/
__Keys__ will be returned in the `primaryKey` and `secondaryKey` fields. The following example shows how to use the `--query` parameter to return only the primary key:

```azurecli
-ENDPOINT_CRED=$(az ml online-endpoint get-credentials -n $ENDPOINT_NAME -o tsv --query primaryKey)
+ENDPOINT_CRED=$(az ml online-endpoint get-credentials -n $ENDPOINT_NAME -g $RESOURCE_GROUP -w $WORKSPACE_NAME -o tsv --query primaryKey)
```

__Tokens__ will be returned in the `accessToken` field:

```azurecli
-ENDPOINT_CRED=$(az ml online-endpoint get-credentials -n $ENDPOINT_NAME -o tsv --query accessToken)
+ENDPOINT_CRED=$(az ml online-endpoint get-credentials -n $ENDPOINT_NAME -g $RESOURCE_GROUP -w $WORKSPACE_NAME -o tsv --query accessToken)
```

Additionally, the `expiryTimeUtc` and `refreshAfterTimeUtc` fields contain the token expiration and refresh times.
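Whichever credential you retrieve, the call to the scoring endpoint passes it as a bearer credential in the `Authorization` header. A minimal Python sketch follows; the scoring URI and request payload are placeholders, not values from this article:

```python
# Minimal sketch: invoke an online endpoint with the retrieved key or token.
# The scoring URI and payload below are placeholders.
import json
import urllib.request

scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
endpoint_cred = "<value of ENDPOINT_CRED from the commands above>"

request = urllib.request.Request(
    scoring_uri,
    data=json.dumps({"data": [[1.0, 2.0, 3.0]]}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Keys and tokens are both sent as bearer credentials.
        "Authorization": f"Bearer {endpoint_cred}",
    },
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```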
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
Previously updated : 04/18/2023 Last updated : 05/25/2023 # Customer intent: As an experienced data scientist with Python skills, I have data located in external sources outside of Azure. I need to make that data available to the Azure Machine Learning platform, to train my machine learning models.
This YAML script creates an Azure SQL DB connection. Be sure to update the appro
# my_sqldb_connection.yaml
$schema: http://azureml/sdk-2-0/Connection.json
-type: azure_sql_db
+type: azuresqldb
name: my_sqldb_connection
target: Server=tcp:<myservername>,<port>;Database=<mydatabase>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30 # add the sql server name, port address and database
credentials:
- type: username_password
+ type: sql_auth
    username: <username> # add the sql database user name here or leave this blank and type in CLI command line
    password: <password> # add the sql database password here or leave this blank and type in CLI command line
```
ml_client.connections.create_or_update(workspace_connection=wps_connection)
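For context, the `wps_connection` object passed above can be built with the v2 SDK's connection entities. The following is a rough sketch, assuming the `WorkspaceConnection` and `UsernamePasswordConfiguration` classes from `azure.ai.ml.entities`; the type string and credential values are placeholders that may differ by SDK version:

```python
# Rough sketch; class availability and the exact type string may vary by SDK version.
from azure.ai.ml.entities import UsernamePasswordConfiguration, WorkspaceConnection

wps_connection = WorkspaceConnection(
    name="my_sqldb_connection",
    type="azuresqldb",  # assumed to match the YAML `type` shown above
    target="Server=tcp:<myservername>,<port>;Database=<mydatabase>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30",
    credentials=UsernamePasswordConfiguration(
        username="<username>", password="<password>"
    ),
)
# ml_client.connections.create_or_update(workspace_connection=wps_connection)
```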
## Next steps

- [Import data assets](how-to-import-data-assets.md)
+- [Import data assets on a schedule](reference-yaml-schedule-data-import.md)
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
Previously updated : 05/17/2023 Last updated : 05/25/2023
ml_client.data.show_materialization_status(name="<name>")
## Next steps
+- [Import data assets on a schedule](reference-yaml-schedule-data-import.md)
- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
- [Working with tables in Azure Machine Learning](how-to-mltable.md)
- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
You can configure a managed VNet using either the `az ml workspace create` or `a
* __Update an existing workspace__:

   > [!WARNING]
- > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, serverless, serverless spark, and managed online endpoints.
+ > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints.
The following example updates an existing workspace. The `--managed-network allow_internet_outbound` parameter configures a managed VNet for the workspace:
To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the manag
> This example is for a managed VNet configured to allow internet traffic. If you want to allow only approved outbound traffic, set `isolation_mode: allow_only_approved_outbound` instead.

```yml
- type: workspace
name: myworkspace
managed_network:
  isolation_mode: allow_internet_outbound
  outbound_rules:
  - name: added-perule
- destination:
- service_resource_id: /subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}
- spark_enabled: true
- subresource_target: blob
- type: private_endpoint
+ destination:
+ service_resource_id: /subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}
+ spark_enabled: true
+ subresource_target: blob
+ type: private_endpoint
```

You can use a YAML configuration file with the `az ml workspace update` command by specifying the `--file` parameter and the name of the YAML file. For example, the following command updates an existing workspace using a YAML file named `workspace_pe.yml`:
machine-learning How To Use Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md
Foundation Models are machine learning models that have been pre-trained on vast
## How to access Foundation Models in Azure Machine Learning
-The 'Model catalog' (preview) in Azure Machine Learning Studio is a hub for discovering Foundation Models. The Open Source Models catalog is a repository of the most popular open source Foundation Models curated by Azure Machine Learning. These models are packaged for out of the box usage and are optimized for use in Azure Machine Learning. Currently, it includes the top open source large language models, with support for other tasks coming soon. You can view the complete list of supported open source Foundation Models in the [Model catalog](https://ml.azure.com/model/catalog), under the `Open Source Models` collection.
+The 'Model catalog' (preview) in Azure Machine Learning Studio is a hub for discovering Foundation Models. The Open Source Models collection is a repository of the most popular open source Foundation Models curated by Azure Machine Learning. These models are packaged for out of the box usage and are optimized for use in Azure Machine Learning. Currently, it includes the top open source large language models, with support for other tasks coming soon. You can view the complete list of supported open source Foundation Models in the [Model catalog](https://ml.azure.com/model/catalog), under the `Open Source Models` collection.
:::image type="content" source="./media/how-to-use-foundation-models/model-catalog.png" lightbox="./media/how-to-use-foundation-models/model-catalog.png" alt-text="Screenshot showing the model catalog section in Azure Machine Learning studio." :::
machine-learning How To Use Serverless Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-serverless-compute.md
Last updated 05/09/2023
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-You no longer need to [create and manage compute](./how-to-create-attach-compute-cluster.md) to train your model in a scalable way. Your job can instead be submitted to a new compute target type, called _serverless compute_. Serverless compute is a compute resource that you don't need to manage. It's created, scaled, and managed by Azure Machine Learning for you. Through model training with serverless compute, machine learning professionals can focus on their expertise of building machine learning models and not have to learn about compute infrastructure or setting it up.
+You no longer need to [create and manage compute](./how-to-create-attach-compute-cluster.md) to train your model in a scalable way. Your job can instead be submitted to a new compute target type, called _serverless compute_. Serverless compute is the easiest way to run training jobs on Azure Machine Learning. Serverless compute is a compute resource that you don't need to manage. It's created, scaled, and managed by Azure Machine Learning for you. Through model training with serverless compute, machine learning professionals can focus on their expertise of building machine learning models and not have to learn about compute infrastructure or setting it up.
[!INCLUDE [machine-learning-preview-generic-disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
Machine learning professionals can specify the resources the job needs. Azure Ma
Enterprises can also reduce costs by specifying optimal resources for each job. IT admins can still apply control by specifying cores quota at the subscription and workspace level and by applying Azure policies.
-Serverless compute can be used to run command, sweep, AutoML, pipeline, distributed training, and interactive jobs from Azure Machine Learning studio, SDK and CLI. Serverless jobs consume the same quota as Azure Machine Learning compute quota. You can choose standard (dedicated) tier or spot (low-priority) VMs. Managed identity and user identity are supported for serverless jobs.
+Serverless compute can be used to run command, sweep, AutoML, pipeline, distributed training, and interactive jobs from Azure Machine Learning studio, SDK and CLI. Serverless jobs consume the same quota as Azure Machine Learning compute quota. You can choose standard (dedicated) tier or spot (low-priority) VMs. Managed identity and user identity are supported for serverless jobs. The billing model is the same as Azure Machine Learning compute.
## Advantages of serverless compute

* You don't need to create, set up, and manage compute anymore to run training jobs, which reduces the steps involved in running a job.
-* You don't need to learn about various compute concepts and related properties.
+* You don't need to learn about various compute types and related properties.
* There's no need to repeatedly create clusters for each VM size needed, using the same settings, and replicating them for each workspace.
* You can optimize costs by specifying the exact resources each job needs at runtime in terms of instance type (VM size) and instance count. You can monitor the utilization metrics of the job to optimize the resources a job would need.
* To further simplify job submission, you can skip the resources altogether. Azure Machine Learning defaults the instance count and chooses an instance type (VM size) based on factors like quota, cost, performance and disk size (see the sketch after this list).
+* Shorter wait times before a job starts executing, in some cases.
* User identity and workspace user assigned managed identity is supported for job submission.
-* Managed network isolation is supported.
+* With managed network isolation, you can streamline and automate your network isolation configuration.
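As one illustration of skipping resources entirely, here's a rough Python (SDK v2) sketch of a command job with no `compute` set, so it runs on serverless compute; the source folder, command, and environment name are placeholders, not values from this article:

```python
# Rough sketch: a command job submitted without a compute target runs on
# serverless compute (preview). The code path, command, and environment
# name below are placeholders.
from azure.ai.ml import command

job = command(
    code="./src",
    command="python main.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    # compute="cpu-cluster",  # omitted on purpose: serverless compute is used instead
)
# ml_client.jobs.create_or_update(job)  # assumes an existing MLClient
```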
## How to use serverless compute
machine-learning Reference Yaml Schedule Data Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule-data-import.md
+
+ Title: 'CLI (v2) schedule YAML schema for data import (preview)'
+
+description: Reference documentation for the CLI (v2) data import schedule YAML schema.
+++++++ Last updated : 05/25/2023+++
+# CLI (v2) import schedule YAML schema
++
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/schedule.schema.json.
++
+## YAML syntax
+
+| Key | Type | Description | Allowed values |
+| | - | -- | -- |
+| `$schema` | string | The YAML schema. | |
+| `name` | string | **Required.** Name of the schedule. | |
+| `version` | string | Version of the schedule. If omitted, Azure Machine Learning autogenerates a version. | |
+| `description` | string | Description of the schedule. | |
+| `tags` | object | Dictionary of tags for the schedule. | |
+| `trigger` | object | The trigger configuration that defines the rule for when to trigger the import. **One of `RecurrenceTrigger` or `CronTrigger` is required.** | |
+| `import_data` | object or string | **Required.** The definition of the data import action that the schedule triggers. **One of `string` or `ImportDataDefinition` is required.**| |
+
+### Trigger configuration
+
+#### Recurrence trigger
+
+| Key | Type | Description | Allowed values |
+| | - | -- | -- |
+| `type` | string | **Required.** Specifies the schedule type. |recurrence|
+|`frequency`| string | **Required.** Specifies the unit of time that describes how often the schedule fires.|`minute`, `hour`, `day`, `week`, `month`|
+|`interval`| integer | **Required.** Specifies the interval at which the schedule fires.| |
+|`start_time`| string |Describes the start date and time with timezone. If start_time is omitted, the first job runs instantly, and future jobs trigger based on the schedule; that is, start_time matches the job creation time. If the start time is in the past, the first job runs at the next calculated run time.| |
+|`end_time`| string |Describes the end date and time with timezone. If end_time is omitted, the schedule runs until it's explicitly disabled.| |
+|`timezone`| string |Specifies the time zone of the recurrence. If omitted, the default is UTC. |See [appendix for timezone values](#timezone)|
+|`pattern`|object|Specifies the pattern of the recurrence. If pattern is omitted, jobs trigger according to the logic of start_time, frequency, and interval.| |
+
+#### Recurrence schedule
+
+Recurrence schedule defines the recurrence pattern, containing `hours`, `minutes`, and `week_days`.
+
+- When frequency is `day`, pattern can specify `hours` and `minutes`.
+- When frequency is `week` or `month`, pattern can specify `hours`, `minutes`, and `week_days`.
+
+| Key | Type | Allowed values |
+| | - | -- |
+|`hours`|integer or array of integer|`0-23`|
+|`minutes`|integer or array of integer|`0-59`|
+|`week_days`|string or array of string|`monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`|
+
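The pattern fires at every combination of the listed `hours` and `minutes`. A plain-Python illustration (not an Azure Machine Learning API) of how a daily pattern expands, using the same values as the YAML examples later in this article:

```python
# Plain-Python illustration of how a daily recurrence pattern expands.
from itertools import product

hours = [4, 5, 10, 11, 12]
minutes = [0, 30]

# Each day, the schedule fires once per (hour, minute) combination.
fire_times = [f"{h:02d}:{m:02d}" for h, m in product(hours, minutes)]
print(fire_times)  # ['04:00', '04:30', '05:00', '05:30', ...]
```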
+#### CronTrigger
+
+| Key | Type | Description | Allowed values |
+| | - | -- | -- |
+| `type` | string | **Required.** Specifies the schedule type. |cron|
+| `expression` | string | **Required.** Specifies the cron expression that defines how to trigger jobs. expression uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields: `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK`||
+|`start_time`| string |Describes the start date and time with timezone. If start_time is omitted, the first job runs instantly, and future jobs trigger based on the schedule; that is, start_time matches the job creation time. If the start time is in the past, the first job runs at the next calculated run time.| |
+|`end_time`| string |Describes the end date and time with timezone. If end_time is omitted, the schedule continues to run until it's explicitly disabled.| |
+|`timezone`| string |Specifies the time zone of the recurrence. If omitted, the default is UTC. |See [appendix for timezone values](#timezone)|
+
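For instance, the expression `0 * * * *` fires at minute zero of every hour. One quick way to sanity-check an expression locally is the third-party `croniter` package, shown here for illustration only; it isn't part of Azure Machine Learning:

```python
# Illustration only: preview the next fire times of a crontab expression
# with the third-party croniter package (pip install croniter).
from datetime import datetime

from croniter import croniter

schedule = croniter("0 * * * *", datetime(2022, 7, 10, 10, 0))
for _ in range(3):
    print(schedule.get_next(datetime))  # 11:00, 12:00, 13:00 on 2022-07-10
```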
+### Import data definition (preview)
++
+Customers can directly use `import_data: ./<data_import>.yaml` or can use the following properties to define the data import definition inline.
+
+| Key | Type | Description | Allowed values |
+| | - | -- | -- |
+|`type`| string | **Required.** Specifies the data asset type that you want to import the data as. It can be mltable when importing from a Database source, or uri_folder when importing from a FileSystem source.|`mltable`, `uri_folder`|
+| `name` | string | **Required.** Data asset name to register the imported data under. | |
+| `path` | string | **Required.** The path to the datastore that takes in the imported data, specified in one of two ways: <br><br> - **Required.** A URI of the datastore path. The only supported URI type is `azureml`. For more information on how to use the `azureml://` URI format, see [Core yaml syntax](reference-yaml-core-syntax.md). To avoid an overwrite, a unique path for each import is recommended. To do this, parameterize the path as shown in this example - `azureml://datastores/<datastore_name>/paths/<source_name>/${{name}}`. The "datastore_name" in the example can be a datastore that you have created or can be workspaceblobstore. Alternately, a "managed datastore" can be selected by referencing it as shown: `azureml://datastores/workspacemanagedstore`, where the system automatically assigns a unique path. | `azureml://<path>` |
+| `source` | object | External source details of the imported data source. See [Attributes of the `source`](#attributes-of-source-preview) for the set of source properties. | |
+
+### Attributes of `source` (preview)
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `type` | string | The type of external source from which you intend to import data. Only the following types are allowed at the moment - `Database` or `FileSystem`| `Database`, `FileSystem` | |
+| `query` | string | Define this value only when the `type` defined above is `Database`. The query in the external source of type `Database` that defines or filters the data that needs to be imported.| | |
+| `path` | string | Define this value only when the `type` defined above is `FileSystem`. The folder path in the external source of type `FileSystem` where the file(s) or data that needs to be imported resides.| | |
+| `connection` | string | **Required.** The connection property for the external source referenced in the format of `azureml:<connection_name>` | | |
++
+## Remarks
+
+The `az ml schedule` command can be used for managing Azure Machine Learning schedules.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/blob/main/cli/assets/dat). A couple are shown below.
+
+## YAML: Schedule for data import with recurrence pattern (preview)
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: simple_recurrence_import_schedule
+display_name: Simple recurrence import schedule
+description: a simple daily recurrence import schedule
+
+trigger:
+ type: recurrence
+ frequency: day #can be minute, hour, day, week, month
+ interval: 1 #every day
+ schedule:
+ hours: [4,5,10,11,12]
+ minutes: [0,30]
+ start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
+ time_zone: "Pacific Standard Time" # optional - default will be UTC
+
+import_data: ./my-snowflake-import-data.yaml
+
+```
+## YAML: Schedule for data import definition inline with recurrence pattern on managed datastore (preview)
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: inline_recurrence_import_schedule
+display_name: Inline recurrence import schedule
+description: an inline daily recurrence import schedule
+
+trigger:
+ type: recurrence
+ frequency: day #can be minute, hour, day, week, month
+ interval: 1 #every day
+ schedule:
+ hours: [4,5,10,11,12]
+ minutes: [0,30]
+ start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
+ time_zone: "Pacific Standard Time" # optional - default will be UTC
+
+import_data:
+ type: mltable
+ name: my_snowflake_ds
+ path: azureml://datastores/workspacemanagedstore
+ source:
+ type: database
+ query: select * from TPCH_SF1.REGION
+ connection: azureml:my_snowflake_connection
+
+```
+
+## YAML: Schedule for data import with cron expression (preview)
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: simple_cron_import_schedule
+display_name: Simple cron import schedule
+description: a simple hourly cron import schedule
+
+trigger:
+ type: cron
+ expression: "0 * * * *"
+ start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
+ time_zone: "Pacific Standard Time" # optional - default will be UTC
+
+import_data: ./my-snowflake-import-data.yaml
+```
+
+## YAML: Schedule for data import definition inline with cron expression (preview)
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: inline_cron_import_schedule
+display_name: Inline cron import schedule
+description: an inline hourly cron import schedule
+
+trigger:
+ type: cron
+ expression: "0 * * * *"
+ start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
+ time_zone: "Pacific Standard Time" # optional - default will be UTC
+
+import_data:
+ type: mltable
+ name: my_snowflake_ds
+ path: azureml://datastores/workspaceblobstore/paths/snowflake/${{name}}
+ source:
+ type: database
+ query: select * from TPCH_SF1.REGION
+ connection: azureml:my_snowflake_connection
+```
+## Appendix
+
+### Timezone
+
+The current schedule supports the timezones in this table. The key can be used directly in the Python SDK, while the value can be used in the data import YAML. The table is sorted by UTC (Coordinated Universal Time).
+
+| UTC | Key | Value |
+|-||--|
+| UTC -12:00 | DATELINE_STANDARD_TIME | "Dateline Standard Time" |
+| UTC -11:00 | UTC_11 | "UTC-11" |
+| UTC -10:00 | ALEUTIAN_STANDARD_TIME | "Aleutian Standard Time" |
+| UTC -10:00 | HAWAIIAN_STANDARD_TIME | "Hawaiian Standard Time" |
+| UTC -09:30 | MARQUESAS_STANDARD_TIME | "Marquesas Standard Time" |
+| UTC -09:00 | ALASKAN_STANDARD_TIME | "Alaskan Standard Time" |
+| UTC -09:00 | UTC_09 | "UTC-09" |
+| UTC -08:00 | PACIFIC_STANDARD_TIME_MEXICO | "Pacific Standard Time (Mexico)" |
+| UTC -08:00 | UTC_08 | "UTC-08" |
+| UTC -08:00 | PACIFIC_STANDARD_TIME | "Pacific Standard Time" |
+| UTC -07:00 | US_MOUNTAIN_STANDARD_TIME | "US Mountain Standard Time" |
+| UTC -07:00 | MOUNTAIN_STANDARD_TIME_MEXICO | "Mountain Standard Time (Mexico)" |
+| UTC -07:00 | MOUNTAIN_STANDARD_TIME | "Mountain Standard Time" |
+| UTC -06:00 | CENTRAL_AMERICA_STANDARD_TIME | "Central America Standard Time" |
+| UTC -06:00 | CENTRAL_STANDARD_TIME | "Central Standard Time" |
+| UTC -06:00 | EASTER_ISLAND_STANDARD_TIME | "Easter Island Standard Time" |
+| UTC -06:00 | CENTRAL_STANDARD_TIME_MEXICO | "Central Standard Time (Mexico)" |
+| UTC -06:00 | CANADA_CENTRAL_STANDARD_TIME | "Canada Central Standard Time" |
+| UTC -05:00 | SA_PACIFIC_STANDARD_TIME | "SA Pacific Standard Time" |
+| UTC -05:00 | EASTERN_STANDARD_TIME_MEXICO | "Eastern Standard Time (Mexico)" |
+| UTC -05:00 | EASTERN_STANDARD_TIME | "Eastern Standard Time" |
+| UTC -05:00 | HAITI_STANDARD_TIME | "Haiti Standard Time" |
+| UTC -05:00 | CUBA_STANDARD_TIME | "Cuba Standard Time" |
+| UTC -05:00 | US_EASTERN_STANDARD_TIME | "US Eastern Standard Time" |
+| UTC -05:00 | TURKS_AND_CAICOS_STANDARD_TIME | "Turks And Caicos Standard Time" |
+| UTC -04:00 | PARAGUAY_STANDARD_TIME | "Paraguay Standard Time" |
+| UTC -04:00 | ATLANTIC_STANDARD_TIME | "Atlantic Standard Time" |
+| UTC -04:00 | VENEZUELA_STANDARD_TIME | "Venezuela Standard Time" |
+| UTC -04:00 | CENTRAL_BRAZILIAN_STANDARD_TIME | "Central Brazilian Standard Time" |
+| UTC -04:00 | SA_WESTERN_STANDARD_TIME | "SA Western Standard Time" |
+| UTC -04:00 | PACIFIC_SA_STANDARD_TIME | "Pacific SA Standard Time" |
+| UTC -03:30 | NEWFOUNDLAND_STANDARD_TIME | "Newfoundland Standard Time" |
+| UTC -03:00 | TOCANTINS_STANDARD_TIME | "Tocantins Standard Time" |
+| UTC -03:00 | E_SOUTH_AMERICAN_STANDARD_TIME | "E. South America Standard Time" |
+| UTC -03:00 | SA_EASTERN_STANDARD_TIME | "SA Eastern Standard Time" |
+| UTC -03:00 | ARGENTINA_STANDARD_TIME | "Argentina Standard Time" |
+| UTC -03:00 | GREENLAND_STANDARD_TIME | "Greenland Standard Time" |
+| UTC -03:00 | MONTEVIDEO_STANDARD_TIME | "Montevideo Standard Time" |
+| UTC -03:00 | SAINT_PIERRE_STANDARD_TIME | "Saint Pierre Standard Time" |
+| UTC -03:00 | BAHIA_STANDARD_TIME | "Bahia Standard Time" |
+| UTC -02:00 | UTC_02 | "UTC-02" |
+| UTC -02:00 | MID_ATLANTIC_STANDARD_TIME | "Mid-Atlantic Standard Time" |
+| UTC -01:00 | AZORES_STANDARD_TIME | "Azores Standard Time" |
+| UTC -01:00 | CAPE_VERDE_STANDARD_TIME | "Cape Verde Standard Time" |
+| UTC | UTC | "UTC" |
+| UTC +00:00 | GMT_STANDARD_TIME | "GMT Standard Time" |
+| UTC +00:00 | GREENWICH_STANDARD_TIME | "Greenwich Standard Time" |
+| UTC +01:00 | MOROCCO_STANDARD_TIME | "Morocco Standard Time" |
+| UTC +01:00 | W_EUROPE_STANDARD_TIME | "W. Europe Standard Time" |
+| UTC +01:00 | CENTRAL_EUROPE_STANDARD_TIME | "Central Europe Standard Time" |
+| UTC +01:00 | ROMANCE_STANDARD_TIME | "Romance Standard Time" |
+| UTC +01:00 | CENTRAL_EUROPEAN_STANDARD_TIME | "Central European Standard Time" |
+| UTC +01:00 | W_CENTRAL_AFRICA_STANDARD_TIME | "W. Central Africa Standard Time" |
+| UTC +02:00 | NAMIBIA_STANDARD_TIME | "Namibia Standard Time" |
+| UTC +02:00 | JORDAN_STANDARD_TIME | "Jordan Standard Time" |
+| UTC +02:00 | GTB_STANDARD_TIME | "GTB Standard Time" |
+| UTC +02:00 | MIDDLE_EAST_STANDARD_TIME | "Middle East Standard Time" |
+| UTC +02:00 | EGYPT_STANDARD_TIME | "Egypt Standard Time" |
+| UTC +02:00 | E_EUROPE_STANDARD_TIME | "E. Europe Standard Time" |
+| UTC +02:00 | SYRIA_STANDARD_TIME | "Syria Standard Time" |
+| UTC +02:00 | WEST_BANK_STANDARD_TIME | "West Bank Standard Time" |
+| UTC +02:00 | SOUTH_AFRICA_STANDARD_TIME | "South Africa Standard Time" |
+| UTC +02:00 | FLE_STANDARD_TIME | "FLE Standard Time" |
+| UTC +02:00 | ISRAEL_STANDARD_TIME | "Israel Standard Time" |
+| UTC +02:00 | KALININGRAD_STANDARD_TIME | "Kaliningrad Standard Time" |
+| UTC +02:00 | LIBYA_STANDARD_TIME | "Libya Standard Time" |
+| UTC +03:00 | TÜRKIYE_STANDARD_TIME | "Türkiye Standard Time" |
+| UTC +03:00 | ARABIC_STANDARD_TIME | "Arabic Standard Time" |
+| UTC +03:00 | ARAB_STANDARD_TIME | "Arab Standard Time" |
+| UTC +03:00 | BELARUS_STANDARD_TIME | "Belarus Standard Time" |
+| UTC +03:00 | RUSSIAN_STANDARD_TIME | "Russian Standard Time" |
+| UTC +03:00 | E_AFRICA_STANDARD_TIME | "E. Africa Standard Time" |
+| UTC +03:30 | IRAN_STANDARD_TIME | "Iran Standard Time" |
+| UTC +04:00 | ARABIAN_STANDARD_TIME | "Arabian Standard Time" |
+| UTC +04:00 | ASTRAKHAN_STANDARD_TIME | "Astrakhan Standard Time" |
+| UTC +04:00 | AZERBAIJAN_STANDARD_TIME | "Azerbaijan Standard Time" |
+| UTC +04:00 | RUSSIA_TIME_ZONE_3 | "Russia Time Zone 3" |
+| UTC +04:00 | MAURITIUS_STANDARD_TIME | "Mauritius Standard Time" |
+| UTC +04:00 | GEORGIAN_STANDARD_TIME | "Georgian Standard Time" |
+| UTC +04:00 | CAUCASUS_STANDARD_TIME | "Caucasus Standard Time" |
+| UTC +04:30 | AFGHANISTAN_STANDARD_TIME | "Afghanistan Standard Time" |
+| UTC +05:00 | WEST_ASIA_STANDARD_TIME | "West Asia Standard Time" |
+| UTC +05:00 | EKATERINBURG_STANDARD_TIME | "Ekaterinburg Standard Time" |
+| UTC +05:00 | PAKISTAN_STANDARD_TIME | "Pakistan Standard Time" |
+| UTC +05:30 | INDIA_STANDARD_TIME | "India Standard Time" |
+| UTC +05:30 | SRI_LANKA_STANDARD_TIME | "Sri Lanka Standard Time" |
+| UTC +05:45 | NEPAL_STANDARD_TIME | "Nepal Standard Time" |
+| UTC +06:00 | CENTRAL_ASIA_STANDARD_TIME | "Central Asia Standard Time" |
+| UTC +06:00 | BANGLADESH_STANDARD_TIME | "Bangladesh Standard Time" |
+| UTC +06:30 | MYANMAR_STANDARD_TIME | "Myanmar Standard Time" |
+| UTC +07:00 | N_CENTRAL_ASIA_STANDARD_TIME | "N. Central Asia Standard Time" |
+| UTC +07:00 | SE_ASIA_STANDARD_TIME | "SE Asia Standard Time" |
+| UTC +07:00 | ALTAI_STANDARD_TIME | "Altai Standard Time" |
+| UTC +07:00 | W_MONGOLIA_STANDARD_TIME | "W. Mongolia Standard Time" |
+| UTC +07:00 | NORTH_ASIA_STANDARD_TIME | "North Asia Standard Time" |
+| UTC +07:00 | TOMSK_STANDARD_TIME | "Tomsk Standard Time" |
+| UTC +08:00 | CHINA_STANDARD_TIME | "China Standard Time" |
+| UTC +08:00 | NORTH_ASIA_EAST_STANDARD_TIME | "North Asia East Standard Time" |
+| UTC +08:00 | SINGAPORE_STANDARD_TIME | "Singapore Standard Time" |
+| UTC +08:00 | W_AUSTRALIA_STANDARD_TIME | "W. Australia Standard Time" |
+| UTC +08:00 | TAIPEI_STANDARD_TIME | "Taipei Standard Time" |
+| UTC +08:00 | ULAANBAATAR_STANDARD_TIME | "Ulaanbaatar Standard Time" |
+| UTC +08:45 | AUS_CENTRAL_W_STANDARD_TIME | "Aus Central W. Standard Time" |
+| UTC +09:00 | NORTH_KOREA_STANDARD_TIME | "North Korea Standard Time" |
+| UTC +09:00 | TRANSBAIKAL_STANDARD_TIME | "Transbaikal Standard Time" |
+| UTC +09:00 | TOKYO_STANDARD_TIME | "Tokyo Standard Time" |
+| UTC +09:00 | KOREA_STANDARD_TIME | "Korea Standard Time" |
+| UTC +09:00 | YAKUTSK_STANDARD_TIME | "Yakutsk Standard Time" |
+| UTC +09:30 | CEN_AUSTRALIA_STANDARD_TIME | "Cen. Australia Standard Time" |
+| UTC +09:30 | AUS_CENTRAL_STANDARD_TIME | "AUS Central Standard Time" |
+| UTC +10:00 | E_AUSTRALIAN_STANDARD_TIME | "E. Australia Standard Time" |
+| UTC +10:00 | AUS_EASTERN_STANDARD_TIME | "AUS Eastern Standard Time" |
+| UTC +10:00 | WEST_PACIFIC_STANDARD_TIME | "West Pacific Standard Time" |
+| UTC +10:00 | TASMANIA_STANDARD_TIME | "Tasmania Standard Time" |
+| UTC +10:00 | VLADIVOSTOK_STANDARD_TIME | "Vladivostok Standard Time" |
+| UTC +10:30 | LORD_HOWE_STANDARD_TIME | "Lord Howe Standard Time" |
+| UTC +11:00 | BOUGAINVILLE_STANDARD_TIME | "Bougainville Standard Time" |
+| UTC +11:00 | RUSSIA_TIME_ZONE_10 | "Russia Time Zone 10" |
+| UTC +11:00 | MAGADAN_STANDARD_TIME | "Magadan Standard Time" |
+| UTC +11:00 | NORFOLK_STANDARD_TIME | "Norfolk Standard Time" |
+| UTC +11:00 | SAKHALIN_STANDARD_TIME | "Sakhalin Standard Time" |
+| UTC +11:00 | CENTRAL_PACIFIC_STANDARD_TIME | "Central Pacific Standard Time" |
+| UTC +12:00 | RUSSIA_TIME_ZONE_11 | "Russia Time Zone 11" |
+| UTC +12:00 | NEW_ZEALAND_STANDARD_TIME | "New Zealand Standard Time" |
+| UTC +12:00 | UTC_12 | "UTC+12" |
+| UTC +12:00 | FIJI_STANDARD_TIME | "Fiji Standard Time" |
+| UTC +12:00 | KAMCHATKA_STANDARD_TIME | "Kamchatka Standard Time" |
+| UTC +12:45 | CHATHAM_ISLANDS_STANDARD_TIME | "Chatham Islands Standard Time" |
+| UTC +13:00 | TONGA_STANDARD_TIME | "Tonga Standard Time" |
+| UTC +13:00 | SAMOA_STANDARD_TIME | "Samoa Standard Time" |
+| UTC +14:00 | LINE_ISLANDS_STANDARD_TIME | "Line Islands Standard Time" |
machine-learning Reference Yaml Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule.md
Title: 'CLI (v2) schedule YAML schema'
-description: Reference documentation for the CLI (v2) schedule YAML schema.
+description: Reference documentation for the CLI (v2) job schedule YAML schema.
Last updated 05/17/2023
-# CLI (v2) schedule YAML schema
+# CLI (v2) job schedule YAML schema
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
machine-learning Tutorial Azure Ml In A Day https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-azure-ml-in-a-day.md
Here, you'll create input variables to specify the input data, split ratio, lear
* In this sample, we access the data from a file on the internet.

> [!NOTE]
-> To use [serverless compute (preview)](./how-to-use-serverless-compute.md), delete `compute="cpu-cluster"` in this code.
+> To use [serverless compute (preview)](./how-to-use-serverless-compute.md), delete `compute="cpu-cluster"` in this code. Serverless compute is the simplest way to run jobs on Azure Machine Learning.
```python
from azure.ai.ml import command
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
Previously updated : 04/10/2023 Last updated : 05/20/2023 # Azure Machine Learning Python SDK release notes
__RSS feed__: Get notified when this page is updated by copying and pasting the
`https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2023-05-20
+
+### Azure Machine Learning SDK for Python v1.51.0
+ + **azureml-automl-core**
+ + AutoML forecasting task now supports rolling forecast and partial support for quantile forecasts for hierarchical time series (HTS).
+ + Disallow customers from using non-tabular datasets for Classification (multi-class and multi-label) scenarios
+ + **azureml-automl-dnn-nlp**
+ + Disallow customers from using non-tabular datasets for Classification (multi-class and multi-label) scenarios
+ + **azureml-contrib-automl-pipeline-steps**
+ + AutoML forecasting task now supports rolling forecast and partial support for quantile forecasts for hierarchical time series (HTS).
+ + **azureml-fsspec**
+ + Replaces all user caused errors in MLTable & FSSpec with a custom UserErrorException imported from azureml-dataprep.
+ + **azureml-interpret**
+ + updated azureml-interpret package to interpret-community 0.29.*
+ + **azureml-pipeline-core**
+ + Fix `pipeline_version` not taking effect when calling `pipeline_endpoint.submit()`.
+ + **azureml-train-automl-client**
+ + AutoML forecasting task now supports rolling forecast and partial support for quantile forecasts for hierarchical time series (HTS).
+ + **azureml-train-automl-runtime**
+ + AutoML forecasting task now supports rolling forecast and partial support for quantile forecasts for hierarchical time series (HTS).
+ + **mltable**
+ + Additional encoding variants like `utf-8` are now supported when loading MLTable files.
+ + Replaces all user caused errors in MLTable & FSSpec with a custom UserErrorException imported from azureml-dataprep.
+ ## 2023-04-10 ### Azure Machine Learning SDK for Python v1.50.0
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
The parameter `replicate_wild_ignore_table` creates a replication filter for tab
- Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL, refer to how to configure binlog_expire_logs_seconds for [flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds)
- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` stored procedure. Refer to the following [examples](./how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter.
- Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306.
-- Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
-- With public access, ensure that the source server has a public IP address, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
-- With private access, ensure that the source server name can be resolved and is accessible from the VNet where the Azure Database for MySQL instance is running. (For more details, visit [Name resolution for resources in Azure virtual networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md)).
+- With **public access**, ensure that the source server has a public IP address, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
+- With **private access**, ensure that the source server name can be resolved and is accessible from the VNet where the Azure Database for MySQL instance is running. (For more details, visit [Name resolution for resources in Azure virtual networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md)).
## Next steps
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-in-replication.md
The results should appear similar to the following. Make sure to note the binary
You can use mysqldump to dump databases from your primary server. For details, refer to [Dump & Restore](../concepts-migrate-dump-restore.md). It's unnecessary to dump the MySQL library and test library.
-1. Set source server to read/write mode.
+2. Set source server to read/write mode.
After the database has been dumped, change the source MySQL server back to read/write mode.
The results should appear similar to the following. Make sure to note the binary
SET GLOBAL read_only = OFF;
UNLOCK TABLES;
```
+> [!NOTE]
+> Before the server is set back to read/write mode, you can retrieve the GTID information using the global variable GTID_EXECUTED. This value is used at a later stage to set the GTID on the replica server.
-1. Restore dump file to new server.
+3. Restore dump file to new server.
Restore the dump file to the server created in the Azure Database for MySQL - Flexible Server service. Refer to [Dump & Restore](../concepts-migrate-dump-restore.md) for how to restore a dump file to a MySQL server. If the dump file is large, upload it to a virtual machine in Azure within the same region as your replica server. Restore it to the Azure Database for MySQL - Flexible Server instance from the virtual machine.

> [!NOTE]
> If you want to avoid setting the database to read only when you dump and restore, you can use [mydumper/myloader](../concepts-migrate-mydumper-myloader.md).
-## Retrieve gtid information from the source server dump
+## Set GTID in Replica Server
1. Skip the step if using bin-log position-based replication.
2. GTID information from the dump file taken from the source is required to reset the GTID history of the target (replica) server.
-3. GTID information from the source server can be retrieved using the following statement:
-
- ```sql
- show global variables like 'gtid_executed';
- UNLOCK TABLES;
- ```
-4. Use this GTID information from the source to execute GTID reset on the replica server using the following CLI command:
+3. Use this GTID information from the source to execute GTID reset on the replica server using the following CLI command:
```azurecli-interactive
- az mysql flexible-server gtid reset --resource-group <resource group> --server-name <source server name> --gtid-set <gtid set from the source server> --subscription <subscription id>
+ az mysql flexible-server gtid reset --resource-group <resource group> --server-name <replica server name> --gtid-set <gtid set from the source server> --subscription <subscription id>
```

For more details, refer to [GTID Reset](/cli/azure/mysql/flexible-server/gtid).
network-watcher Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/data-residency.md
Title: Data residency for Azure Network Watcher
-description: This article will help you understand data residency for the Azure Network Watcher service.
+description: Learn about data residency for Azure Network Watcher.
- Previously updated : 06/16/2021 Last updated : 05/25/2023 -++ # Data residency for Azure Network Watcher
-With the exception of the Connection Monitor service, Azure Network Watcher doesn't store customer data.
+Azure Network Watcher doesn't store customer data, except for the Connection monitor.
+
+## Connection monitor data residency
+
+Connection monitor stores customer data. Network Watcher automatically stores this data in a single region, so Connection monitor automatically satisfies in-region data residency requirements, including the requirements specified on the [Microsoft Trust Center](https://www.microsoft.com/trust-center).
-## Connection Monitor data residency
-The Connection Monitor service stores customer data. This data is automatically stored by Network Watcher in a single region. So Connection Monitor automatically satisfies in-region data residency requirements, including requirements specified on the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
+## Data residency in Azure
-## Data residency
-In Azure, the feature that enables storing customer data in a single region is currently available only in the Southeast Asia Region (Singapore) of the Asia Pacific geo and Brazil South (Sao Paulo State) Region of the Brazil geo. For all other regions, customer data is stored in Geo. For more information, see the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
+In Azure, single-region data residency is currently provided by default only in the Southeast Asia Region (Singapore) of the Asia Pacific geography and the Brazil South (Sao Paulo State) Region of the Brazil geography. For all other regions, customer data is stored within the geography. For more information, see [Data residency in Azure](https://azure.microsoft.com/explore/global-infrastructure/data-residency).
## Next steps
-* Read an overview of [Network Watcher](./network-watcher-monitoring-overview.md).
+To learn more about Network Watcher features and capabilities, see [Network Watcher overview](./network-watcher-monitoring-overview.md).
network-watcher Network Watcher Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-create.md
Network Watcher is a regional service that enables you to monitor and diagnose c
## Enable Network Watcher for your region
-You can enable Network Watcher for a region by creating a Network Watcher instance in that region. You can create a Network Watcher instance using the Azure portal, PowerShell, the Azure CLI, Azure Resource Manager (ARM) template or the REST API.
+You can enable Network Watcher for a region by creating a Network Watcher instance in that region. You can create a Network Watcher instance using the [Azure portal](?tabs=portal#enable-network-watcher-for-your-region), [PowerShell](?tabs=powershell#enable-network-watcher-for-your-region), the [Azure CLI](?tabs=cli#enable-network-watcher-for-your-region), [REST API](/rest/api/network-watcher/network-watchers/create-or-update), or an Azure Resource Manager (ARM) template.
> [!NOTE]
> Network Watcher is automatically enabled. When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your virtual network's region. Automatically enabling Network Watcher doesn't affect your resources or associated charge.
You can enable Network Watcher for a region by creating a Network Watcher instan
1. Select **Add**.
- :::image type="content" source="./media/network-watcher-create/create-network-watcher.png" alt-text="Screenshot shows how to create a Network Watcher in the Azure portal.":::
+ :::image type="content" source="./media/network-watcher-create/create-network-watcher.png" alt-text="Screenshot shows how to create a Network Watcher in the Azure portal." lightbox="./media/network-watcher-create/create-network-watcher.png":::
> [!NOTE]
> When you create a Network Watcher instance using the Azure portal:
New-AzNetworkWatcher -Name 'NetworkWatcher_eastus' -ResourceGroupName 'NetworkWa
Create a Network Watcher instance using the [az network watcher configure](/cli/azure/network/watcher#az-network-watcher-configure) command:

```azurecli-interactive
+# Create a resource group for the Network Watcher instance (if it doesn't already exist).
+az group create --name 'NetworkWatcherRG' --location 'eastus'
+
# Create an instance of Network Watcher in East US region.
az network watcher configure --resource-group 'NetworkWatcherRG' --locations 'eastus' --enabled
```
If you wish to customize the name of the Network Watcher instance, you can use [
## Disable Network Watcher for your region
-You can disable Network Watcher for a region by deleting the Network Watcher instance in that region. You can delete a Network Watcher instance using the Azure portal, PowerShell, the Azure CLI or the [REST API](/rest/api/network-watcher/network-watchers/delete).
+You can disable Network Watcher for a region by deleting the Network Watcher instance in that region. You can delete a Network Watcher instance using the [Azure portal](?tabs=portal#disable-network-watcher-for-your-region), [PowerShell](?tabs=powershell#disable-network-watcher-for-your-region), the [Azure CLI](?tabs=cli#disable-network-watcher-for-your-region), or [REST API](/rest/api/network-watcher/network-watchers/delete).
> [!WARNING]
> Deleting a Network Watcher instance deletes all Network Watcher running operations, historical data, and alerts with no option to revert. For example, deleting the `NetworkWatcher_eastus` instance deletes all Network Watcher running operations, data, and alerts in the East US region.
You can disable Network Watcher for a region by deleting the Network Watcher ins
1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
-3. In the search results, select **Network Watcher**.
+1. In the **Overview** page, select the Network Watcher instances that you want to delete, then select **Disable**.
-4. In the **Overview** page, select the Network Watcher instances that you want to delete, then select **Disable**.
+ :::image type="content" source="./media/network-watcher-create/delete-network-watcher.png" alt-text="Screenshot shows how to delete a Network Watcher instance in the Azure portal." lightbox="./media/network-watcher-create/delete-network-watcher.png":::
- :::image type="content" source="./media/network-watcher-create/delete-network-watcher.png" alt-text="Screenshot shows how to delete a Network Watcher instance in the Azure portal.":::
+1. Enter *yes*, then select **Delete**.
-5. Enter *yes*, then select **Delete**.
-
- :::image type="content" source="./media/network-watcher-create/confirm-delete-network-watcher.png" alt-text="Screenshot showing the confirmation page before deleting a Network Watcher in the Azure portal.":::
+ :::image type="content" source="./media/network-watcher-create/confirm-delete-network-watcher.png" alt-text="Screenshot showing the confirmation page before deleting a Network Watcher in the Azure portal." lightbox="./media/network-watcher-create/confirm-delete-network-watcher.png":::
# [**PowerShell**](#tab/powershell)
Opting-out of Network Watcher automatic enablement isn't available in the Azure
To opt out of Network Watcher automatic enablement, use the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) cmdlet to register the `DisableNetworkWatcherAutocreation` feature for the `Microsoft.Network` resource provider. Then, use the [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider) cmdlet to register the `Microsoft.Network` resource provider.

```azurepowershell-interactive
-# Register the DisableNetworkWatcherAutocreation feature.
+# Register the "DisableNetworkWatcherAutocreation" feature.
Register-AzProviderFeature -FeatureName 'DisableNetworkWatcherAutocreation' -ProviderNamespace 'Microsoft.Network'
-# Register the Microsoft.Network resource provider.
+# Register the "Microsoft.Network" resource provider.
Register-AzResourceProvider -ProviderNamespace 'Microsoft.Network'
```
Register-AzResourceProvider -ProviderNamespace 'Microsoft.Network'
To opt out of Network Watcher automatic enablement, use the [az feature register](/cli/azure/feature#az-feature-register) command to register the `DisableNetworkWatcherAutocreation` feature for the `Microsoft.Network` resource provider. Then, use the [az provider register](/cli/azure/provider#az-provider-register) command to register the `Microsoft.Network` resource provider.

```azurecli-interactive
+# Register the "DisableNetworkWatcherAutocreation" feature.
az feature register --name 'DisableNetworkWatcherAutocreation' --namespace 'Microsoft.Network'
+
+# Register the "Microsoft.Network" resource provider.
az provider register --name 'Microsoft.Network'
```
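+
+Optionally, verify the registration state of the feature before creating or updating virtual networks. A quick check (propagation of the `Registered` state can take a few minutes):
+
+```azurecli-interactive
+# Check the registration state of the "DisableNetworkWatcherAutocreation" feature.
+az feature show --name 'DisableNetworkWatcherAutocreation' --namespace 'Microsoft.Network' --query 'properties.state'
+```
+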
+## List Network Watcher instances
+
+You can view all regions where Network Watcher is enabled by listing the available Network Watcher instances in your subscription. Use the [Azure portal](?tabs=portal#list-network-watcher-instances), [PowerShell](?tabs=powershell#list-network-watcher-instances), the [Azure CLI](?tabs=cli#list-network-watcher-instances), or the [REST API](/rest/api/network-watcher/network-watchers/list-all) to list Network Watcher instances in your subscription.
+
+# [**Portal**](#tab/portal)
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. In the **Overview** page, you can see all Network Watcher instances in your subscription.
+
+ :::image type="content" source="./media/network-watcher-create/list-network-watcher.png" alt-text="Screenshot shows how to list all Network Watcher instances in your subscription in the Azure portal." lightbox="./media/network-watcher-create/list-network-watcher.png":::
+
+# [**PowerShell**](#tab/powershell)
+
+List all Network Watcher instances in your subscription using [Get-AzNetworkWatcher](/powershell/module/az.network/get-aznetworkwatcher).
+
+```azurepowershell-interactive
+# List all Network Watcher instances in your subscription.
+Get-AzNetworkWatcher
+```
+
+# [**Azure CLI**](#tab/cli)
+
+List all Network Watcher instances in your subscription using [az network watcher list](/cli/azure/network/watcher#az-network-watcher-list).
+
+```azurecli-interactive
+# List all Network Watcher instances in your subscription.
+az network watcher list --out table
+```
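+
+For example, to check whether Network Watcher is enabled in a specific region, filter the output with a JMESPath query (shown here for the East US region):
+
+```azurecli-interactive
+# List Network Watcher instances located in the East US region.
+az network watcher list --query "[?location=='eastus'].name" --out table
+```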
+
+---
+
## Next steps

To learn more about Network Watcher features, see:
operator-nexus How To Route Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/how-to-route-policy.md
+
+ Title: "Azure Operator Nexus: How to create route policy in Network Fabric"
description: Learn how to use the create, view, list, update, and delete commands for route policies in Network Fabric.
++++ Last updated : 05/20/2023+++
+# Route Policy in Network Fabric
+
+Route policies give operators the capability to allow or deny routes in Layer 3 isolation domains in Network Fabric.
+
+With route policies, routes are tagged with certain attributes via community values
+and extended community values when they're distributed via the Border Gateway Protocol (BGP).
+Similarly, on the BGP listener side, route policies can be authored to discard or allow
+routes based on community values and extended community value attributes.
+
+Route policies enable operators to control routes learned and distributed via BGP.
+Each route policy is modeled as a separate top-level Azure Resource Manager (ARM) resource under
+`Microsoft.ManagedNetworkFabric`.
+Operators can create, read, and delete route policy resources.
+The operator creates a route policy ARM resource and then sets its ID in the L3 isolation
+domain at the required enforcement point.
+A route policy can be applied at only a single enforcement point; it can't be applied at
+multiple enforcement points.
+
+In a network fabric, route policies can be enforced at the following endpoints of a
+layer 3 isolation domain:
+
+**External networks** (option **A** and option **B**):
+
+For egress, set the `exportRoutePolicyId` property of the external network resource
+to the route policy resource ID created for the egress direction.
+For ingress, set the `importRoutePolicyId` property of the external network resource
+to the route policy resource ID created for the ingress direction.
+
+**Internal networks:**
+
+For egress, set the `exportRoutePolicyId` property of the internal network resource
+to the route policy resource ID created for the egress direction.
+For ingress, set the `importRoutePolicyId` property of the internal network resource
+to the route policy resource ID created for the ingress direction.
+
+**Connected subnets across all internal networks:**
+
+For egress, set the `connectedSubnetRoutePolicy` property of the L3 isolation domain
+to the route policy resource ID created for egress direction.
+
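+For illustration, a hypothetical CLI sketch of referencing route policies from an internal network follows. The `--import-route-policy-id` and `--export-route-policy-id` parameter names are assumptions inferred from the property names above, and the resource names are placeholders; check `az nf internalnetwork create --help` for the exact syntax in your CLI version.
+
+```azurecli
+# Hypothetical sketch: reference ingress and egress route policies from an internal network.
+# Parameter names are assumed from the importRoutePolicyId/exportRoutePolicyId properties.
+az nf internalnetwork create \
+--resource-group "ResourceGroupName" \
+--l3-isolation-domain-name "l3domain-2701" \
+--resource-name "internalnetwork-2701" \
+--vlan-id 501 \
+--import-route-policy-id "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/routePolicies/ingress-policy" \
+--export-route-policy-id "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/routePolicies/egress-policy"
+```
+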
+## Conditions and actions of a route policy
+
+The following combinations of conditions can be specified:
+
+* _IP prefix_
+* _IP community_
+* _Extended community list_
+
+### Actions
+
+The following actions can be specified when there's a match of conditions:
+
+* _Discard the route_
+* _Permit the route and apply one of the following specific actions:_
+  * _Add or remove specified community values and extended community values_
+  * _Overwrite specified community values and extended community values_
+
+## IP prefix
+
+IP prefixes are used in specifying match conditions for route policies.
+An IP prefix resource allows operators to manipulate routes based on the IP prefix (IPv4 and IPv6).
+The IP prefixes enable operators to drop certain prefixes from being propagated upstream or downstream, or to tag them with specific community or extended community values.
+The operator must create an ARM resource of the type IP-Prefix by providing a list of prefixes with sequence numbers and actions.
+
+The prefixes in the list are processed in ascending order and the matching process stops after the first match. If the first match condition is "deny", the route is dropped and isn't propagated further. If the first match condition is "allow", further matching is aborted and the route is handled based on the action part of the route policies.
++
+IP prefixes specify only the match conditions of route policies. They don't specify the action part of route policies.
+
+### Parameters for IP prefix
+
+| Parameter | Description | Example | Required |
+|-----------|-------------|---------|----------|
+| resource-group | Use an appropriate resource group name specifically for the IP prefix of your choice | ResourceGroupName | True |
+| resource-name | Resource name of the IP prefix | ipprefixv4-1204-cn1 | True |
+| location | Azure region used during NFC creation | eastus | True |
+| action | Action to be taken for the prefix | Permit or Deny | True |
+| sequenceNumber | Sequence in which the prefixes are processed. Prefix lists are evaluated starting with the lowest sequence number and continue down the list until a match is made. Once a match is made, the permit or deny statement is applied to that network and the rest of the list is ignored | 100 | True |
+| networkPrefix | Network prefix specifying IPv4/IPv6 packets to be permitted or denied | 1.1.1.0/24 | True |
+| condition | Specified prefix list bounds: EqualTo \| GreaterThanOrEqualTo \| LesserThanOrEqualTo | EqualTo | |
+| subnetMaskLength | Specifies the minimum networkPrefix length to be matched. Required when condition is specified. | 32 | |
+
+### Create IP prefix
+
+This command creates an IP prefix resource with IPv4 prefix rules:
+
+```azurecli
+az nf ipprefix create \
+--resource-group "ResourceGroupName" \
+--resource-name "ipprefixv4-1204-cn1" \
+--location "eastus" \
+--ip-prefix-rules '[{"action": "Permit", "sequenceNumber": 10, "networkPrefix": "10.10.10.0/28", "condition": "EqualTo", "subnetMaskLength": 28}, {"action": "Permit", "sequenceNumber": 12, "networkPrefix": "20.20.20.0/24", "condition": "EqualTo", "subnetMaskLength": 24}]'
+```
+
+Expected output:
+
+```output
+{
+ "annotation": null,
+ "id": "/subscriptions/xxxx-xxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipPrefixes/ipprefixv4-1204-cn1",
+ "ipPrefixRules": [
+ {
+ "action": "Permit",
+ "condition": "GreaterThanOrEqualTo",
+ "networkPrefix": "10.10.10.0/28",
+ "sequenceNumber": 10,
+ "subnetMaskLength": 28
+ }
+ ],
+ "location": "eastus",
+ "name": " ipprefixv4-1204-cn1",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "ResourceGroupName",
+ "systemData": {
+ "createdAt": "2023-XX-XXT09:34:19.095543+00:00",
+ "createdBy": "user@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT09:34:19.095543+00:00",
+ "lastModifiedBy": "user@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/ipprefixes"
+}
+```
+
+This command creates an IP prefix resource with IPv6 prefix rules:
+
+```azurecli
+az nf ipprefix create \
+--resource-group "ResourceGroupName" \
+--resource-name "ipprefixv6-2701-cn1" \
+--location "eastus" \
+--ip-prefix-rules '[{"action": "Permit", "sequenceNumber": 10, "networkPrefix": "fda0:d59c:da12:20::/64", "condition": "GreaterThanOrEqualTo", "subnetMaskLength": 68}]'
+```
+
+Expected output:
+
+```output
+{
+ "annotation": null,
+ "id": "/subscriptions/xxxx-xxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipPrefixes/ipprefixv6-2701-cn1",
+ "ipPrefixRules": [
+ {
+ "action": "Permit",
+ "condition": "GreaterThanOrEqualTo",
+ "networkPrefix": "fda0:d59c:da12:20::/64",
+ "sequenceNumber": 10,
+ "subnetMaskLength": 68
+ }
+ ],
+ "location": "eastus",
+ "name": "ipprefixv6-2701-cn1",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "ResourceGroupName",
+ "systemData": {
+ "createdAt": "2023-XX-XXT09:34:19.095543+00:00",
+ "createdBy": "user@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT09:34:19.095543+00:00",
+ "lastModifiedBy": "user@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/ipprefixes"
+}
+```
+
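+### Show IP prefix
+
+Assuming the `az nf ipprefix` command group follows the same pattern as the other resources in this article, you can display an IP prefix resource as follows (a sketch; verify the exact command with `az nf ipprefix --help`):
+
+```azurecli
+az nf ipprefix show --resource-group "ResourceGroupName" --resource-name "ipprefixv4-1204-cn1"
+```
+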
+## IP community
+
+The IP community resource allows operators to manipulate routes based on community values tagged to routes. It enables operators to specify conditions and actions for adding or removing routes as they're propagated upstream or downstream, or for tagging them with specific community values. The operator must create an ARM resource of the type IP-Community.
+
+### Parameters for IP community
+
+| Parameter | Description | Example | Required |
+|-----------|-------------|---------|----------|
+| resource-group | Use an appropriate resource group name specifically for your IP community | ResourceGroupName | True |
+| resource-name | Resource name of the IP community | ipcommunity-2701 | True |
+| location | Azure region used during NFC creation | eastus | True |
+| action | Action to be taken for the IP community | Permit or Deny | True |
+| wellKnownCommunities | Supported well-known community list. `Internet` - Advertise routes to the internet community. `LocalAS` - Advertise routes to only localAS peers. `NoAdvertise` - Don't advertise routes to any peer. `NoExport` - Don't export to the next AS. `GShut` - Graceful Shutdown (GSHUT); withdraw routes before terminating the BGP connection | LocalAS | True |
+| communityMembers | List of the communityMembers of the IP community. The expected formats are "AA:nn" (for example, "65535:65535") and \<integer32> (for example, 4294967040). The possible values of "AA:nn" are 0-65535, and of \<integer32> 1-4294967040. | 65535:65535 | True |
++
+> [!NOTE]
+> Either the `wellKnownCommunities` or the `communityMembers` parameter must be passed when creating an IP community resource.
+
+### Create IP community
+
+This command creates an IP community resource:
+
+```azurecli
+az nf ipcommunity create \
+--resource-group "ResourceGroupName" \
+--resource-name "ipcommunity-2701" \
+--location "eastus" \
+--action "Permit" \
+--well-known-communities "Internet" "LocalAS" "GShut" \
+--community-members "65500:12701"
+```
+
+Expected output:
+
+```output
+{
+ "action": "Permit",
+ "annotation": null,
+ "communityMembers": [
+ "65500:12701"
+ ],
+ "id": "/subscriptions/9531faa8-8c39-4165-b033-48697fe943db/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipCommunities/ipcommunity-2701",
+ "location": "eastus",
+ "name": "ipcommunity-2701",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "ResourceGroupName",
+ "systemData": {
+ "createdAt": "2023-XX-XXT09:48:15.472935+00:00",
+ "createdBy": "user@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT09:48:15.472935+00:00",
+ "lastModifiedBy": "user@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/ipcommunities",
+ "wellKnownCommunities": [
+ "Internet",
+ "LocalAS",
+ "GShut"
+ ]
+}
+```
+
+### Show IP community
+
+This command displays an IP community resource:
+
+```azurecli
+az nf ipcommunity show --resource-group "ResourceGroupName" --resource-name "ipcommunity-2701"
+
+```
+
+Expected output:
+
+```output
+{
+ "action": "Permit",
+ "annotation": null,
+ "communityMembers": [
+ "65500:12701"
+ ],
+ "id": "/subscriptions/9531faa8-8c39-4165-b033-48697fe943db/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipCommunities/ipcommunity-2701",
+ "location": "eastus",
+ "name": "ipcommunity-2701",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "ResourceGroupName",
+ "systemData": {
+ "createdAt": "2023-XX-XXT09:48:15.472935+00:00",
+ "createdBy": "user@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT09:48:15.472935+00:00",
+ "lastModifiedBy": "user@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/ipcommunities",
+ "wellKnownCommunities": [
+ "Internet",
+ "LocalAS",
+ "GShut"
+ ]
+}
+```
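+
+You can also enumerate the IP community resources in a resource group. This sketch assumes the extension supports the common `list` verb for this resource type:
+
+```azurecli
+az nf ipcommunity list --resource-group "ResourceGroupName"
+```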
+
+## IP extended community
+
+The `IPExtendedCommunity` resource allows operators to manipulate routes based on route targets. Operators use it to specify conditions and actions for adding or removing routes as they're propagated upstream or downstream, or for tagging them with specific extended community values. The operator must create an ARM resource of the type `IPExtendedCommunityList` by providing a list of community values and specific properties. Extended community lists are used in specifying the match conditions and action properties of route policies.
+
+### Parameters for IP extended community
+
+| Parameter | Description | Example | Required |
+|-----------|-------------|---------|----------|
+| resource-group | Use an appropriate resource group name specifically for your IP extended community | ResourceGroupName | True |
+| resource-name | Resource name of the IP extended community | ipextcommunity-2701 | True |
+| location | Azure region used during NFC creation | eastus | True |
+| action | Action to be taken for the IP extended community | Permit or Deny | True |
+| routeTargets | Route target list. The expected formats are "ASN(plain):nn" (for example, "4294967294:50"), "ASN.ASN:nn" (for example, "65533.65333:40"), and "IP-address:nn" (for example, "10.10.10.10:65535"). The possible values of "nn" are within the 0-65535 range, and "ASN(plain)" within the 0-4294967295 range. | "1234:5678" | True |
+
+### Create IP extended community
+
+This command creates an IP extended community resource:
+
+```azurecli
+az nf ipextendedcommunity create \
+--resource-group "ResourceGroupName" \
+--resource-name "ipextcommunity-2701" \
+--location "eastus" \
+--action "Permit" \
+--route-targets "65046:45678"
+```
+
+Expected output:
+
+```output
+{
+ "action": "Permit",
+ "annotation": null,
+ "id": "/subscriptions/9531faa8-8c39-4165-b033-48697fe943db/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipExtendedCommunities/ipextcommunity-2701",
+ "location": "eastus",
+ "name": "ipextcommunity-2701",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "ResourceGroupName",
+ "routeTargets": [
+ "65046:45678"
+ ],
+ "systemData": {
+ "createdAt": "2023-XX-XXT09:52:30.385929+00:00",
+ "createdBy": "user@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT09:52:30.385929+00:00",
+ "lastModifiedBy": "user@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/ipextendedcommunities"
+}
+```
+
+### Show IP extended community
+
+This command displays an IP extended community resource:
+
+```azurecli
+az nf ipextendedcommunity show --resource-group "ResourceGroupName" --resource-name "ipextcommunity-2701"
+```
+
+Expected output:
+
+```output
+{
+ "action": "Permit",
+ "annotation": null,
+ "id": "/subscriptions/9531faa8-8c39-4165-b033-48697fe943db/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipExtendedCommunities/ipextcommunity-2701",
+ "location": "eastus",
+ "name": "ipextcommunity-2701",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "ResourceGroupName",
+ "routeTargets": [
+ "65046:45678"
+ ],
+ "systemData": {
+ "createdAt": "2023-XX-XXT09:52:30.385929+00:00",
+ "createdBy": "user@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT09:52:30.385929+00:00",
+ "lastModifiedBy": "user@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/ipextendedcommunities"
+}
+```
+
+## Route policy
+
+The route policy resource enables an operator to specify conditions and actions based on IP prefixes, IP community lists, and IP extended community lists. Each route policy consists of multiple statements, and each statement consists of a sequence number, conditions, and actions. The conditions can be combinations of IP prefixes, IP communities, and IP extended communities, and they're applied in ascending order of sequence numbers. The action corresponding to the first matched condition is executed. If the matched condition has "Deny" as its action, the route is discarded and no further processing takes place. If the action corresponding to the matched condition is "Permit", the following combinations of actions are allowed:
+
+* Updating the local preference
+* Adding, deleting, or setting IP community lists
+* Adding, deleting, or setting IP extended community lists
+
+### Parameters for Route policy
+
+| Parameter | Description | Example | Required |
+|-----------|-------------|---------|----------|
+| resource-group | Use an appropriate resource group name specifically for your route policy | ResourceGroupName | True |
+| resource-name | Resource name of the route policy | rcf-Fab3-l3domain-v6-connsubnet-ext-policy | True |
+| location | Azure region used during NFC creation | eastus | True |
+| statements | List of one or more route policy statements | | True |
+| sequenceNumber | Sequence in which route policy statements are processed. Statements are evaluated starting with the lowest sequence number and continue down the list until a match condition is met. Once a match is made, the action is applied and the rest of the list is ignored | 1 | True |
+| condition | Route policy condition properties. Contains a list of IP community ARM IDs, IP extended community ARM IDs, or an IP prefix ARM ID. One of the three (ipCommunityIds, ipExtendedCommunityIds, ipPrefixId) is required in a condition. If more than one is specified, the condition is matched if any one of the resources has a match. | | True |
+| ipCommunityIds | List of IP community resource IDs | | False |
+| ipExtendedCommunityIds | List of IP extended community resource IDs | | False |
+| ipPrefixId | ARM resource ID of the IP prefix | | False |
+| action | Route policy action properties. Describes the action to be performed if there's a match of the condition in the statement. At least one of localPreference, ipCommunityProperties, or ipExtendedCommunityProperties must be specified | Permit | True |
+| localPreference | Local preference to be set as part of the action | 10 | False |
+| ipCommunityProperties | Details of IP communities that need to be added, removed, or set as part of the action | | False |
+| add | Applicable when the action is to add IP communities or IP extended communities | | |
+| delete | Applicable when the action is to delete IP communities or IP extended communities | | |
+| set | Applicable when the action is to set IP communities or IP extended communities | | |
+| ipCommunityIds | IP community ARM resource IDs that need to be added, deleted, or set | | |
+| ipExtendedCommunityProperties | Details of IP extended communities that need to be added, removed, or set as part of the action | | |
+| ipExtendedCommunityIDs | IP extended community ARM resource IDs that need to be added, deleted, or set | | |
+
+### Create route policy
+
+This command creates route policies:
+
+```azurecli
+az nf routepolicy create \
+--resource-group "ResourceGroupName" \
+--resource-name "rcf-Fab3-l3domain-v6-connsubnet-ext-policy" \
+--location "eastus" \
+--statements '[{"sequenceNumber": 10, "condition": {"ipPrefixId": "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipPrefixes/ipprefixv6-2701-staticsubnet"},
+ "action": {"actionType": "Permit", "ipCommunityProperties": {"set":
+ {"ipCommunityIds": ["/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipCommunities/ipcommunity-2701-staticsubnet"]}}}},
+ {"sequenceNumber": 30, "condition": {"ipPrefixId": "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipPrefixes/ipprefixv6-2701-connsubnet"},
+ "action": {"actionType": "Permit", "ipCommunityProperties": {"set":
+ {"ipCommunityIds": ["/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipCommunities/ipcommunity-connsubnet-2701"]}}}}]'
+```
+
+Expected output:
+
+```output
+{
+ "annotation": null,
+ "id": "/subscriptions/9531faa8-8c39-4165-b033-48697fe943db/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/routePolicies/rcf-Fab3-l3domain-v6-connsubnet-ext-policy",
+ "location": "eastus",
+ "name": "rcf-Fab3-l3domain-v6-connsubnet-ext-policy",
+ "provisioningState": "Accepted",
+ "resourceGroup": "ResourceGroupName",
+ "statements": [
+ {
+ "action": {
+ "actionType": "Permit",
+ "ipCommunityProperties": {
+ "add": null,
+ "delete": null,
+ "set": {
+ "ipCommunityIds": [
+ "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipCommunities/ipcommunity-2701-staticsubnet"
+ ]
+ }
+ },
+ "ipExtendedCommunityProperties": null,
+ "localPreference": null
+ },
+ "annotation": null,
+ "condition": {
+ "ipCommunityIds": null,
+ "ipExtendedCommunityIds": null,
+ "ipPrefixId": "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipPrefixes/ipprefixv6-2701-staticsubnet"
+ },
+ "sequenceNumber": 10
+ },
+ {
+ "action": {
+ "actionType": "Permit",
+ "ipCommunityProperties": {
+ "add": null,
+ "delete": null,
+ "set": {
+ "ipCommunityIds": [
+ "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipCommunities/ipcommunity-connsubnet-2701"
+ ]
+ }
+ },
+ "ipExtendedCommunityProperties": null,
+ "localPreference": null
+ },
+ "annotation": null,
+ "condition": {
+ "ipCommunityIds": null,
+ "ipExtendedCommunityIds": null,
+ "ipPrefixId": "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipPrefixes/ipprefixv6-2701-connsubnet"
+ },
+ "sequenceNumber": 30
+ }
+ ],
+ "systemData": {
+ "createdAt": "2023-XX-XXT10:10:21.123560+00:00",
+ "createdBy": "user@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT10:10:21.123560+00:00",
+ "lastModifiedBy": "user@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/routepolicies"
+}
+```
+
+### Show route policy
+
+This command displays route policies:
+
```azurecli
+az nf routepolicy show --resource-group "ResourceGroupName" --resource-name "rcf-Fab3-l3domain-v6-connsubnet-ext-policy"
+```
+
+Expected output:
+
```output
+{
+ "annotation": null,
+ "id": "/subscriptions/9531faa8-8c39-4165-b033-48697fe943db/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/routePolicies/rcf-Fab3-l3domain-v6-connsubnet-ext-policy",
+ "location": "eastus",
+ "name": "rcf-Fab3-l3domain-v6-connsubnet-ext-policy",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "ResourceGroupName",
+ "statements": [
+ {
+ "action": {
+ "actionType": "Permit",
+ "ipCommunityProperties": {
+ "add": null,
+ "delete": null,
+ "set": {
+ "ipCommunityIds": [
+ "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipCommunities/ipcommunity-2701-staticsubnet"
+ ]
+ }
+ },
+ "ipExtendedCommunityProperties": null,
+ "localPreference": null
+ },
+ "annotation": null,
+ "condition": {
+ "ipCommunityIds": null,
+ "ipExtendedCommunityIds": null,
+ "ipPrefixId": "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipPrefixes/ipprefixv6-2701-staticsubnet"
+ },
+ "sequenceNumber": 10
+ },
+ {
+ "action": {
+ "actionType": "Permit",
+ "ipCommunityProperties": {
+ "add": null,
+ "delete": null,
+ "set": {
+ "ipCommunityIds": [
+ "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipCommunities/ipcommunity-connsubnet-2701"
+ ]
+ }
+ },
+ "ipExtendedCommunityProperties": null,
+ "localPreference": null
+ },
+ "annotation": null,
+ "condition": {
+ "ipCommunityIds": null,
+ "ipExtendedCommunityIds": null,
+ "ipPrefixId": "/subscriptions/<subscription-id>/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/ipPrefixes/ipprefixv6-2701-connsubnet"
+ },
+ "sequenceNumber": 30
+ }
+ ],
+ "systemData": {
+ "createdAt": "2023-XX-XXT10:10:21.123560+00:00",
+ "createdBy": "user@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-XX-XXT10:10:21.123560+00:00",
+ "lastModifiedBy": "user@addresscom",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/routepolicies"
+}
+```
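+
+### Delete route policy
+
+A route policy that's no longer referenced at any enforcement point can be removed. This sketch assumes the extension supports the common `delete` verb for this resource type:
+
+```azurecli
+az nf routepolicy delete --resource-group "ResourceGroupName" --resource-name "rcf-Fab3-l3domain-v6-connsubnet-ext-policy"
+```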
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
When there is a major disaster event such as availability zone-level or regional
This section summarizes considerations about the read replica feature. The following considerations apply.
-- **Power operations**: Power operations (start/stop) are currently not supported for any node, either replica or primary, in the replication cluster.
+- **Power operations**: [Power operations](how-to-stop-start-server-portal.md), including start and stop actions, can be applied to both the primary server and its replica servers. However, to preserve system integrity, follow a specific sequence: before stopping the read replicas, ensure the primary server is stopped first. When starting, initiate the start action on the replica servers before starting the primary server. A CLI sketch of this ordering follows these considerations.
- If a server has read replicas, the read replicas should be deleted first, before deleting the primary server.
- [In-place major version upgrade](concepts-major-version-upgrade.md) in Azure Database for PostgreSQL requires removing any read replicas that are currently enabled on the server. Once the replicas have been deleted, the primary server can be upgraded to the desired major version. After the upgrade is complete, you can recreate the replicas to resume the replication process.
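+A sketch of the recommended power operation ordering using the Azure CLI, assuming a primary server named `primary-server` with one read replica named `replica-server-1` (both names are placeholders):
+
+```azurecli
+# Stop: primary server first, then the read replica.
+az postgres flexible-server stop --resource-group ResourceGroupName --name primary-server
+az postgres flexible-server stop --resource-group ResourceGroupName --name replica-server-1
+
+# Start: read replica first, then the primary server.
+az postgres flexible-server start --resource-group ResourceGroupName --name replica-server-1
+az postgres flexible-server start --resource-group ResourceGroupName --name primary-server
+```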
A read replica is created as a new Azure Database for PostgreSQL server. An exis
During creation of read replicas, firewall rules and the data encryption method can be changed. Server parameters and the authentication method are inherited from the primary server and can't be changed during creation. After a replica is created, several settings can be changed, including storage, compute, backup retention period, server parameters, authentication method, and firewall rules.
+### Resource move
+Moving read replicas, or a primary server that has read replicas, to another resource group or subscription isn't currently supported.
+
### Replication slot issues mitigation

In rare cases, high lag caused by replication slots can lead to an increase in storage usage on the primary server due to the accumulation of WAL files. If the storage usage reaches 95% or the available capacity falls below 5 GiB, the server automatically switches to read-only mode to prevent disk-full errors.
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
* Public preview of [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL ΓÇô Flexible Server. * Postgres 15 is now available in public preview for Azure Database for PostgreSQL ΓÇô Flexible Server in limited regions. * General availability: [Pgvector extension](how-to-use-pgvector.md) for Azure Database for PostgreSQL - Flexible Server.
+* General availability: [Azure Key Vault Managed HSM](./concepts-data-encryption.md#using-azure-key-vault-managed-hsm) with Azure Database for PostgreSQL - Flexible Server.
## Release: April 2023 * Public preview of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
route-server Monitor Route Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/monitor-route-server.md
Title: Monitor Azure Route Server
-description: Learn how to monitor Azure Route Server metrics using Azure Monitor.
+description: Learn how to monitor your Azure Route Server using Azure Monitor and understand available metrics.
Previously updated : 05/16/2022- Last updated : 05/25/2023+ # Monitor Azure Route Server This article helps you understand Azure Route Server monitoring and metrics using Azure Monitor. Azure Monitor is a one-stop shop for all metrics, alerting, and diagnostic logs across all of Azure.
->[!NOTE]
->Using **Classic Metrics** is not recommended.
->
+> [!NOTE]
+> Using **Classic Metrics** is not recommended.
## Route Server metrics To view Azure Route Server metrics, go to your Route Server resource in the Azure portal and select **Metrics**.
-Once a metric is selected, the default aggregation will be applied. Optionally, you can apply splitting, which will show the metric with different dimensions.
+Once a metric is selected, the default aggregation is applied. Optionally, you can apply splitting, which shows the metric with different dimensions.
:::image type="content" source="./m