Updates from: 07/15/2023 01:10:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md
Previously updated : 05/29/2023 Last updated : 07/13/2023
Define your application and service architecture, inventory current systems, and
| Move on-premises dependencies to the cloud | To help ensure a resilient solution, consider moving existing application dependencies to the cloud. |
| Migrate existing apps to b2clogin.com | The deprecation of login.microsoftonline.com will go into effect for all Azure AD B2C tenants on 04 December 2020. [Learn more](b2clogin.md). |
| Use Identity Protection and Conditional Access | Use these capabilities for significantly greater control over risky authentications and access policies. Azure AD B2C Premium P2 is required. [Learn more](conditional-access-identity-protection-overview.md). |
-|Tenant size | You need to plan with Azure AD B2C tenant size in mind. By default, Azure AD B2C tenant can accommodate 1.25 million objects (user accounts and applications). You can increase this limit to 5.25 million objects by adding a custom domain to your tenant, and verifying it. If you need a bigger tenant size, you need to contact [Support](find-help-open-support-ticket.md).|
+|Tenant size | You need to plan with Azure AD B2C tenant size in mind. By default, an Azure AD B2C tenant can accommodate 1 million objects (user accounts and applications). You can increase this limit to 5 million objects by adding a custom domain to your tenant, and verifying it. If you need a bigger tenant size, you need to contact [Support](find-help-open-support-ticket.md).|
| Use Identity Protection and Conditional Access | Use these capabilities for greater control over risky authentications and access policies. Azure AD B2C Premium P2 is required. [Learn more](conditional-access-identity-protection-overview.md). |

## Implementation
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
Previously updated : 06/23/2023 Last updated : 07/13/2023
Before you create your Azure AD B2C tenant, you need to take the following consi
- You can create up to **20** tenants per subscription. This limit helps protect against threats to your resources, such as denial-of-service attacks, and is enforced in both the Azure portal and the underlying tenant creation API. If you want to increase this limit, please contact [Microsoft Support](find-help-open-support-ticket.md).
-- By default, each tenant can accommodate a total of **1.25 million** objects (user accounts and applications), but you can increase this limit to **5.25 million** objects when you add and verify a custom domain. If you want to increase this limit, please contact [Microsoft Support](find-help-open-support-ticket.md). However, if you created your tenant before **September 2022**, this limit doesn't affect you, and your tenant will retain the size allocated to it at creation, that's, **50 million** objects. Learn how to [read your tenant usage](microsoft-graph-operations.md#tenant-usage).
+- By default, each tenant can accommodate a total of **1 million** objects (user accounts and applications), but you can increase this limit to **5 million** objects when you add and verify a custom domain. If you want to increase this limit, please contact [Microsoft Support](find-help-open-support-ticket.md). However, if you created your tenant before **September 2022**, this limit doesn't affect you, and your tenant will retain the size allocated to it at creation, that is, **50 million** objects. Learn how to [read your tenant usage](microsoft-graph-operations.md#tenant-usage).
- If you want to reuse a tenant name that you previously tried to delete, but you see the error "Already in use by another directory" when you enter the domain name, you'll need to [follow these steps to fully delete the tenant](./faq.yml?tabs=app-reg-ga#how-do-i-delete-my-azure-ad-b2c-tenant-) before you try again. You require a role of at least *Subscription Administrator*. After deleting the tenant, you might also need to sign out and sign back in before you can reuse the domain name.
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
If none of the above scenarios apply, there's no need to validate the token, and
APIs and web applications must only validate tokens that have an `aud` claim that matches the application. Other resources may have custom token validation rules. For example, you can't validate tokens for Microsoft Graph according to these rules due to their proprietary format. Validating and accepting tokens meant for another resource is an example of the [confused deputy](https://cwe.mitre.org/data/definitions/441.html) problem.
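For illustration, an API built on `Microsoft.IdentityModel.Tokens` might constrain the audiences it accepts as follows. This is a minimal sketch, not the library's full prescribed setup, and the audience values are placeholders for your own application's client ID:

```csharp
using Microsoft.IdentityModel.Tokens;

// Minimal sketch: accept only tokens whose aud claim matches this application.
// Both audience forms below are placeholders for your app's actual values.
var tokenValidationParameters = new TokenValidationParameters
{
    ValidAudiences = new[]
    {
        "<your-application-client-id>",
        "api://<your-application-client-id>"
    }
    // Issuer and signing-key validation are configured separately; see below.
};
```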
-If the application needs to validate an ID token or an access token, it should first validate the signature of the token and the issuer against the values in the OpenID discovery document. For example, the tenant-independent version of the document is located at [https://login.microsoftonline.com/common/.well-known/openid-configuration](https://login.microsoftonline.com/common/.well-known/openid-configuration).
+If the application needs to validate an ID token or an access token, it should first validate the signature of the token and the issuer against the values in the OpenID discovery document.
The Azure AD middleware has built-in capabilities for validating access tokens, see [samples](sample-v2-code.md) to find one in the appropriate language. There are also several third-party open-source libraries available for JWT validation. For more information about Azure AD authentication libraries and code samples, see the [authentication libraries](reference-v2-libraries.md).
+### Validate the issuer
+
+[OpenID Connect Core](https://openid.net/specs/openid-connect-core-1_0.html#IDTokenValidation) says "The Issuer Identifier \[...\] MUST exactly match the value of the iss (issuer) Claim." For applications that use a tenant-specific metadata endpoint (like [https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/{example-tenant-id}/v2.0/.well-known/openid-configuration) or [https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration)), this is all that is needed.
+Azure AD makes available a tenant-independent version of the document for multi-tenant apps at [https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration). This endpoint returns an issuer value `https://login.microsoftonline.com/{tenantid}/v2.0`. Applications may use this tenant-independent endpoint to validate tokens from every tenant with the following modifications:
+
+ 1. Instead of expecting the issuer claim in the token to exactly match the issuer value from metadata, the application should replace the `{tenantid}` value in the issuer metadata with the tenant ID that is the target of the current request, and then check the exact match.
+
+ 1. The application should use the `issuer` property returned from the keys endpoint to restrict the scope of keys.
+ - Keys that have an issuer value like `https://login.microsoftonline.com/{tenantid}/v2.0` may be used with any matching token issuer.
+ - Keys that have an issuer value like `https://login.microsoftonline.com/9188040d-6c67-4c5b-b112-36a304b66dad/v2.0` should only be used with exact match.
+ Azure AD's tenant-independent key endpoint ([https://login.microsoftonline.com/common/discovery/v2.0/keys](https://login.microsoftonline.com/common/discovery/v2.0/keys)) returns a document like:
+ ```json
+ {
+ "keys":[
+ {"kty":"RSA","use":"sig","kid":"jS1Xo1OWDj_52vbwGNgvQO2VzMc","x5t":"jS1Xo1OWDj_52vbwGNgvQO2VzMc","n":"spv...","e":"AQAB","x5c":["MIID..."],"issuer":"https://login.microsoftonline.com/{tenantid}/v2.0"},
+ {"kty":"RSA","use":"sig","kid":"2ZQpJ3UpbjAYXYGaXEJl8lV0TOI","x5t":"2ZQpJ3UpbjAYXYGaXEJl8lV0TOI","n":"wEM...","e":"AQAB","x5c":["MIID..."],"issuer":"https://login.microsoftonline.com/{tenantid}/v2.0"},
+ {"kty":"RSA","use":"sig","kid":"yreX2PsLi-qkbR8QDOmB_ySxp8Q","x5t":"yreX2PsLi-qkbR8QDOmB_ySxp8Q","n":"rv0...","e":"AQAB","x5c":["MIID..."],"issuer":"https://login.microsoftonline.com/9188040d-6c67-4c5b-b112-36a304b66dad/v2.0"}
+ ]
+ }
+ ```
+
+1. Applications that use Azure AD's tenant ID (`tid`) claim as a trust boundary instead of the standard issuer claim should ensure that the tenant-id claim is a GUID and that the issuer and tenant ID match.
+
+Using tenant-independent metadata is more efficient for applications that accept tokens from many tenants. A minimal sketch of this issuer check follows the note below.
+
+> [!NOTE]
+> With Azure AD tenant-independent metadata, claims should be interpreted within the tenant, just as under standard OpenID Connect, claims are interpreted within the issuer. That is, `{"sub":"ABC123","iss":"https://login.microsoftonline.com/{example-tenant-id}/v2.0","tid":"{example-tenant-id}"}` and `{"sub":"ABC123","iss":"https://login.microsoftonline.com/{another-tenant-id}/v2.0","tid":"{another-tenant-id}"}` describe different users, even though the `sub` is the same, because claims like `sub` are interpreted within the context of the issuer/tenant.
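+
+The issuer check described in this section can be implemented in a few lines. The following is a minimal sketch, assuming a v2.0 token whose `tid` claim carries the tenant ID; the method and parameter names are illustrative, not part of any library:
+
+```csharp
+using System;
+
+// Hedged sketch of the tenant-independent issuer check described above.
+// metadataIssuer comes from the common metadata document:
+//   "https://login.microsoftonline.com/{tenantid}/v2.0"
+static bool IsIssuerValid(string tokenIssuer, string tokenTid, string metadataIssuer)
+{
+    // The tid claim must be a GUID before it can serve as a trust boundary.
+    if (!Guid.TryParse(tokenTid, out _))
+    {
+        return false;
+    }
+
+    // Substitute the tenant that is the target of the current request for the
+    // {tenantid} placeholder, then require an exact match with the iss claim.
+    string expectedIssuer = metadataIssuer.Replace("{tenantid}", tokenTid);
+    return string.Equals(tokenIssuer, expectedIssuer, StringComparison.Ordinal);
+}
+```
+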
+### Validate the signature
+
+A JWT contains three segments separated by the `.` character. The first segment is the **header**, the second is the **body**, and the third is the **signature**. Use the signature segment to evaluate the authenticity of the token.
The server possibly revokes refresh tokens due to a change in credentials, or du
| Password changed by user | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
| User does SSPR | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
| Admin resets password | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| User or admin revokes the refresh tokens by using [PowerShell](/powershell/module/microsoft.graph.users.actions/invoke-mginvalidateuserrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
+| User or admin revokes the refresh tokens by using [PowerShell](/powershell/module/microsoft.graph.beta.users.actions/invoke-mgbetainvalidateuserrefreshtoken?view=graph-powershell-beta&preserve-view=true) | Revoked | Revoked | Revoked | Revoked | Revoked |
| [Single sign-out](v2-protocols-oidc.md#single-sign-out) on web | Revoked | Stays alive | Revoked | Stays alive | Stays alive |

#### Non-password-based
active-directory Sample Daemon Dotnet Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-daemon-dotnet-call-api.md
+
+ Title: Call an API in a sample .NET daemon application
+description: Learn how to configure a sample .NET daemon application that calls an API protected with Azure Active Directory (Azure AD) for customers
++++++++ Last updated : 07/13/2023+
+#Customer intent: As a dev, devops, I want to configure a sample .NET daemon application that calls an API protected by Azure Active Directory (Azure AD) for customers tenant
++
+# Call an API in a sample .NET daemon application
+
+This article uses a sample .NET daemon application to show you how a daemon application acquires a token to call a protected web API. Azure Active Directory (Azure AD) for customers protects the web API.
+
+A daemon application acquires a token on behalf of itself (not on behalf of a user). Users don't interact with a daemon application; it runs unattended and authenticates by using its own identity. This type of application requests an access token by presenting its application ID, a credential (password or certificate), and an application ID URI to Azure AD.
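+
+Conceptually, the token request looks like the following MSAL.NET sketch. This is illustrative only; the sample you configure below handles it for you, and the authority, IDs, and secret are placeholders:
+
+```csharp
+using Microsoft.Identity.Client;
+
+// Hedged sketch: a daemon acquiring a token with its own identity (client credentials).
+var app = ConfidentialClientApplicationBuilder
+    .Create("<daemon-app-client-id>")
+    .WithClientSecret("<daemon-app-client-secret>")
+    .WithAuthority("https://<tenant-subdomain>.ciamlogin.com/")
+    .Build();
+
+// /.default requests the application permissions granted to the daemon app.
+AuthenticationResult result = await app
+    .AcquireTokenForClient(new[] { "api://<web-api-client-id>/.default" })
+    .ExecuteAsync();
+```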
+
+## Prerequisites
+
+- [.NET 7.0](https://dotnet.microsoft.com/download/dotnet/7.0) or later.
+
+- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.
+
+- Azure AD for customers tenant. If you don't already have one, [sign up for a free trial](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl).
+
+## Register a daemon application and a web API
+
+In this step, you create the daemon and the web API application registrations, and you specify the scopes of your web API.
+
+### Register a web API application
++
+### Configure application roles
++
+### Configure optional claims
++
+### Register the daemon application
++
+### Create a client secret
++
+### Grant API permissions to the daemon application
++
+## Clone or download sample daemon application and web API
+
+To get the web application sample code, you can do either of the following tasks:
+
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/archive/refs/heads/main.zip).
+
+- Clone the sample web application from GitHub by running the following command:
+
+ ```console
+ git clone https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial.git
+ ```
+If you choose to download the *.zip* file, extract the sample application file to a folder where the total length of the path is 260 or fewer characters.
+
+## Configure the sample daemon application and API
+
+To use your app registration in the client web application sample:
+
+1. In your code editor, open the *ms-identity-ciam-dotnet-tutorial/2-Authorization/3-call-own-api-dotnet-core-daemon/ToDoListClient/appsettings.json* file.
+
+1. Find the following placeholders:
+
+ - `Enter_the_Application_Id_Here` and replace it with the Application (client) ID of the daemon application you registered earlier.
+
+ - `Enter_the_Tenant_Subdomain_Here` and replace it with the Directory (tenant) subdomain. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
+
+ - `Enter_the_Client_Secret_Here` and replace it with the daemon application secret value you copied earlier.
+
+ - `Enter_the_Web_Api_Application_Id_Here` and replace it with the Application (client) ID of the web API you copied earlier.
+
+To use your app registration in the web API sample:
+
+1. In your code editor, open the *ms-identity-ciam-dotnet-tutorial/2-Authorization/3-call-own-api-dotnet-core-daemon/ToDoListAPI/appsettings.json* file.
+
+1. Find the following placeholders:
+
+ - `Enter_the_Application_Id_Here` and replace it with the Application (client) ID of the web API you copied.
+
+ - `Enter_the_Tenant_Id_Here` and replace it with the Directory (tenant) ID you copied earlier.
+
+ - `Enter_the_Tenant_Subdomain_Here` and replace it with the Directory (tenant) subdomain. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
+
+## Run and test sample daemon application and API
+
+1. Open a console window, then run the web API by using the following commands:
+
+ ```console
+ cd 2-Authorization\3-call-own-api-dotnet-core-daemon\ToDoListAPI
+ dotnet run
+ ```
+1. Run the daemon client by using the following commands:
+
+ ```console
+ cd 2-Authorization\3-call-own-api-dotnet-core-daemon\ToDoListClient
+ dotnet run
+ ```
+
+If your daemon application and web API run successfully, you should see output similar to the following in your console window:
+
+```bash
+Posting a to-do...
+Retrieving to-do's from server...
+To-do data:
+ID: 1
+User ID: 41b1e1a8-8e51-4514-8dab-e568afa2826c
+Message: Bake bread
+Posting a second to-do...
+Retrieving to-do's from server...
+To-do data:
+ID: 1
+User ID: 41b1e1a8-8e51-4514-8dab-e568afa2826c
+Message: Bake bread
+ID: 2
+User ID: 41b1e1a8-8e51-4514-8dab-e568afa2826c
+Message: Butter bread
+Deleting a to-do...
+Retrieving to-do's from server...
+To-do data:
+ID: 2
+User ID: 41b1e1a8-8e51-4514-8dab-e568afa2826c
+Message: Butter bread
+Editing a to-do...
+Retrieving to-do's from server...
+To-do data:
+ID: 2
+User ID: 41b1e1a8-8e51-4514-8dab-e568afa2826c
+Message: Eat bread
+Deleting remaining to-do...
+Retrieving to-do's from server...
+There are no to-do's in server
+```
+
+## How it works
+
+The daemon application uses the [OAuth 2.0 client credentials grant](../../develop/v2-oauth2-client-creds-grant-flow.md) to acquire an access token for itself, not for a user. The access token that the app requests contains the permissions represented as roles. The client credentials flow uses this set of permissions in place of user scopes for application tokens. You [exposed these application permissions](#configure-application-roles) in the web API earlier, then [granted them to the daemon app](#grant-api-permissions-to-the-daemon-application). The daemon app in this article uses the [Microsoft Authentication Library for .NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) to simplify the process of acquiring a token.
+
+On the API side, the web API must verify that the access token has the required permissions (application permissions). The web API rejects access tokens that don't have the required permissions.
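+
+As an illustration, the API side might guard each request with a roles-claim check like the following sketch. The role name `ToDoList.ReadWrite.All` is a placeholder for whatever app role you configured:
+
+```csharp
+using System.Linq;
+using Microsoft.AspNetCore.Http;
+
+// Hedged sketch: reject app-only tokens that lack the required application permission.
+static bool HasRequiredAppRole(HttpContext context)
+{
+    // Application permissions appear in the "roles" claim of an app-only token.
+    return context.User.Claims
+        .Where(c => c.Type == "roles")
+        .Any(c => c.Value == "ToDoList.ReadWrite.All"); // placeholder role name
+}
+```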
+
+## See also
+
+See the tutorial on how to [build your own .NET daemon app that calls an API](./tutorial-daemon-dotnet-call-api-prepare-tenant.md).
active-directory Tutorial Daemon Dotnet Call Api Build App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-dotnet-call-api-build-app.md
+
+ Title: "Tutorial: Call a protected web API from your .NET daemon application"
+description: Learn about how to call a protected web API from your .NET client daemon app.
++++++++ Last updated : 07/13/2023++
+# Tutorial: Call a protected web API from your .NET daemon application
+
+In this tutorial, you build your client daemon app and call a protected web API. You enable the client daemon app to acquire an access token using its own identity, then call the web API.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Configure a daemon app to use its app registration details.
+> - Build a daemon app that acquires a token on its own behalf and calls a protected web API.
+
+## Prerequisites
+
+Before continuing with this tutorial, ensure you have all of the following items in place:
+
+- Registration details for the daemon app and web API you created in the [prepare app tutorial](tutorial-daemon-dotnet-call-api-prepare-tenant.md). You need the following details:
+
+ - The Application (client) ID of the client daemon app that you registered.
+ - The Directory (tenant) subdomain where you registered your daemon app.
+ - The secret value for the daemon app you created.
+ - The Application (client) ID of the web API app you registered.
+
+- A protected *ToDoList* web API that is running and ready to accept requests. If you haven't created one, see the [create a protected web API tutorial](how-to-protect-web-api-dotnet-core-overview.md). Ensure this web API is using the app registration details you created in the [prepare app tutorial](tutorial-daemon-dotnet-call-api-prepare-tenant.md).
+- The base URL and port on which the web API is running. For example, 44351. Ensure the API exposes the following endpoints over HTTPS:
+
+ - `GET /api/todolist` to get all todos.
+ - `POST /api/todolist` to add a todo.
+- [.NET 7.0](https://dotnet.microsoft.com/download/dotnet/7.0) or later.
+- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.
+
+## 1. Create a .NET daemon app
+
+1. Open your terminal and navigate to the folder where you want your project to live.
+1. Initialize a .NET console app and navigate to its root folder.
+
+ ```dotnetcli
+ dotnet new console -n ToDoListClient
+ cd ToDoListClient
+ ```
+
+## 2. Install packages
+
+Install `Microsoft.Identity.Web` and `Microsoft.Identity.Web.DownstreamApi` packages:
+
+```dotnetcli
+dotnet add package Microsoft.Identity.Web
+dotnet add package Microsoft.Identity.Web.DownstreamApi
+```
+
+`Microsoft.Identity.Web` provides the glue between ASP.NET Core, the authentication middleware, and the Microsoft Authentication Library (MSAL) for .NET, making it easier for you to add authentication and authorization capabilities to your app. `Microsoft.Identity.Web.DownstreamApi` provides an interface used to call a downstream API.
+
+## 3. Create an appsettings.json file and add registration configs
+
+1. Create an *appsettings.json* file in the root folder of the app.
+1. Add app registration details to the *appsettings.json* file.
+
+ ```json
+ {
+ "AzureAd": {
+ "Authority": "https://<Enter_the_Tenant_Subdomain_Here>.ciamlogin.com/",
+ "ClientId": "<Enter_the_Application_Id_here>",
+ "ClientCredentials": [
+ {
+ "SourceType": "ClientSecret",
+ "ClientSecret": "<Enter_the_Client_Secret_Here>"
+ }
+ ]
+ },
+ "DownstreamApi": {
+ "BaseUrl": "<Web_API_base_url>",
+ "RelativePath": "api/todolist",
+ "RequestAppToken": true,
+ "Scopes": [
+ "api://<Enter_the_Web_Api_Application_Id_Here>/.default"
+ ]
+ }
+ }
+ ```
+
+ Replace the following values with your own:
+
+ | Value | Description |
+ |--|-|
+ |*Enter_the_Application_Id_Here*| The Application (client) ID of the client daemon app that you registered. |
+ |*Enter_the_Tenant_Subdomain_Here*| The Directory (tenant) subdomain. |
+ |*Enter_the_Client_Secret_Here*| The daemon app secret value you created. |
+ |*Enter_the_Web_Api_Application_Id_Here*| The Application (client) ID of the web API app you registered. |
+ |*Web_API_base_url*| The base URL of the web API. For example, `https://localhost:44351/`, where 44351 is the port your API is running on. Your API should already be running and awaiting requests by this stage for you to get this value.|
+
+## 4. Add models
+
+Navigate to the root of your project folder and create a *models* folder. In the *models* folder, create a *ToDo.cs* file and add the following code:
+
+```csharp
+using System;
+
+namespace ToDoListClient.Models;
+
+public class ToDo
+{
+ public int Id { get; set; }
+ public Guid Owner { get; set; }
+ public string Description { get; set; } = string.Empty;
+}
+```
+
+## 5. Acquire access token
+
+You have now configured the required items for your daemon application. In this step, you write the code that enables the daemon app to acquire an access token.
+
+1. Open the *program.cs* file in your code editor and delete its contents.
+1. Add your packages to the file.
+
+ ```csharp
+ using Microsoft.Extensions.DependencyInjection;
+ using Microsoft.Identity.Abstractions;
+ using Microsoft.Identity.Web;
+ using ToDoListClient.Models;
+ ```
+1. Create the token acquisition instance. Use the `GetDefaultInstance` method of the `TokenAcquirerFactory` class of `Microsoft.Identity.Web` package to build the token acquisition instance. By default, the instance reads an *appsettings.json* file if it exists in the same folder as the app. `GetDefaultInstance` also allows us to add services to the service collection.
+
+ Add this line of code to the *program.cs* file:
+
+ ```csharp
+ var tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance();
+ ```
+
+1. Configure the application options to be read from the configuration and add the `DownstreamApi` service. The `DownstreamApi` service provides an interface used to call a downstream API. We call this service *DownstreamApi* in the config object. The daemon app reads the downstream API configs from the *DownstreamApi* section of *appsettings.json*. By default, you get an in-memory token cache.
+
+ Add the following code snippet to the *program.cs* file:
+
+ ```csharp
+ const string ServiceName = "DownstreamApi";
+
+ tokenAcquirerFactory.Services.AddDownstreamApi(ServiceName,
+ tokenAcquirerFactory.Configuration.GetSection("DownstreamApi"));
+ ```
+
+1. Build the token acquirer. This step composes all the services you've added to the service collection and returns a service provider. Use this service provider to access the API resource you added. In this case, you added only one API resource as a downstream service that you want to access.
+
+ Add the following code snippet to the *program.cs* file:
+
+ ```csharp
+ var serviceProvider = tokenAcquirerFactory.Build();
+ ```
+
+## 6. Call the web API
+
+Add code to call your protected web API using the `IDownstreamApi` interface. In this tutorial, you only implement a call to Post a todo and another one to Get all todos. See the other implementations such as Delete and Put in the [sample code](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/blob/main/2-Authorization/3-call-own-api-dotnet-core-daemon/ToDoListClient/Program.cs).
+
+Add this line of code to the *program.cs* file:
+
+```csharp
+var toDoApiClient = serviceProvider.GetRequiredService<IDownstreamApi>();
+
+Console.WriteLine("Posting a to-do...");
+
+var firstNewToDo = await toDoApiClient.PostForAppAsync<ToDo, ToDo>(
+ ServiceName,
+ new ToDo()
+ {
+ Owner = Guid.NewGuid(),
+ Description = "Bake bread"
+ });
+
+await DisplayToDosFromServer();
+
+async Task DisplayToDosFromServer()
+{
+ Console.WriteLine("Retrieving to-do's from server...");
+ var toDos = await toDoApiClient!.GetForAppAsync<IEnumerable<ToDo>>(
+ ServiceName,
+ options => options.RelativePath = "/api/todolist"
+ );
+
+ if (!toDos!.Any())
+ {
+ Console.WriteLine("There are no to-do's in server");
+ return;
+ }
+
+ Console.WriteLine("To-do data:");
+
+ foreach (var toDo in toDos!) {
+ DisplayToDo(toDo);
+ }
+}
+
+void DisplayToDo(ToDo toDo) {
+ Console.WriteLine($"ID: {toDo.Id}");
+ Console.WriteLine($"User ID: {toDo.Owner}");
+ Console.WriteLine($"Message: {toDo.Description}");
+}
+```
+
+## 7. Run the client daemon app
+
+Navigate to the root folder of the daemon app and run the following command:
+
+```dotnetcli
+dotnet run
+```
+
+If everything is okay, you should see the following output in your terminal.
+
+```bash
+Posting a to-do...
+Retrieving to-do's from server...
+To-do data:
+ID: 1
+User ID: f4e54f8b-acec-4ef4-90e9-5bb358c8770b
+Message: Bake bread
+```
+
+### Troubleshoot
+
+If you run into errors:
+
+- Confirm the registration details you added to the *appsettings.json* file.
+- Confirm that you're calling the web API on the correct port and over HTTPS.
+- Confirm that your app permissions are configured correctly.
+
+The full sample code is [available on GitHub](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/tree/main/2-Authorization/3-call-own-api-dotnet-core-daemon).
+
+## 8. Clean up resources
+
+If you don't intend to use the apps you have registered and created in this tutorial, delete them to avoid incurring any costs.
+
+## See also
+
+- [Call an API in a sample Node.js daemon application](./sample-daemon-node-call-api.md)
active-directory Tutorial Daemon Dotnet Call Api Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-daemon-dotnet-call-api-prepare-tenant.md
+
+ Title: "Tutorial: Register and configure .NET daemon app authentication details in a customer tenant"
+description: Learn about how to prepare your Azure Active Directory (Azure AD) for customers tenant to acquire an access token using client credentials flow in your .NET daemon application
++++++++ Last updated : 07/13/2023++
+# Tutorial: Register and configure .NET daemon app authentication details in a customer tenant
+
+The first step in securing your applications is to register them. In this tutorial, you prepare your Azure Active Directory (Azure AD) for customers tenant for authorization. This tutorial is part of a series that guides you to develop a .NET daemon app that calls your own custom protected web API using Azure AD for customers.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Register a web API and configure app permissions in the Microsoft Entra admin center.
+> - Register a client daemon application and grant it app permissions in the Microsoft Entra admin center.
+> - Create a client secret for your daemon application in the Microsoft Entra admin center.
+
+## 1. Register a web API application
++
+## 2. Configure app roles
++
+## 3. Configure optional claims
++
+## 4. Register the daemon app
++
+## 5. Create a client secret
++
+## 6. Grant API permissions to the daemon app
++
+## 7. Pick your registration details
+
+The next step after this tutorial is to build a daemon app that calls your web API. Ensure you have the following details:
+
+- The Application (client) ID of the client daemon app that you registered.
+- The Directory (tenant) subdomain where you registered your daemon app.
+- The secret value for the daemon app you created.
+- The Application (client) ID of the web API app you registered.
+
+## Next steps
+
+In the next tutorial, you configure your daemon and web API applications.
+
+> [!div class="nextstepaction"]
+> [Prepare your daemon application >](tutorial-daemon-dotnet-call-api-build-app.md)
active-directory Tshoot Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-password-hash-synchronization.md
password hash synchronization for this on-premises Active Directory account fail
#### User has a temporary password
-Currently, Azure AD Connect does not support synchronizing temporary passwords with Azure AD. A password is considered to be temporary if the **Change password at next logon** option is set on the on-premises Active Directory user. The following error is returned:
+Older versions of Azure AD Connect did not support synchronizing temporary passwords with Azure AD. A password is considered to be temporary if the **Change password at next logon** option is set on the on-premises Active Directory user. The following error is returned with these older versions:
![Temporary password is not exported](./media/tshoot-connect-password-hash-synchronization/phssingleobjecttemporarypassword.png)
+To enable synchronization of temporary passwords, you must have Azure AD Connect version 2.0.3.0 or higher installed, and the [ForcePasswordChangeOnLogon](../connect/how-to-connect-password-hash-synchronization.md#synchronizing-temporary-passwords-and-force-password-change-on-next-logon) feature must be enabled.
+#### Results of last attempt to synchronize password aren't available
+
+By default, Azure AD Connect stores the results of password hash synchronization attempts for seven days. If there are no results available for the selected Active Directory object, the following warning is returned:
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
Last updated 01/26/2023
-zone_pivot_groups: enterprise-apps-minus-aad-powershell
+zone_pivot_groups: enterprise-apps-minus-former-powershell
#Customer intent: As an administrator of an Azure AD tenant, I want to configure the properties of an enterprise application.
active-directory Application List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-list.md
Previously updated : 01/07/2022 Last updated : 07/14/2023
When filtered to **All Applications**, the **All Applications** **List** shows e
- When you add a new application registration by creating a custom-developed application using the [Application Registry](../develop/quickstart-register-app.md)
- When you add a new application registration by creating a custom-developed application using the [V2.0 Application Registration portal](../develop/quickstart-register-app.md)
- When you add an application you're developing using Visual Studio's [ASP.NET Authentication Methods](https://www.asp.net/visual-studio/overview/2013/creating-web-projects-in-visual-studio#orgauthoptions) or [Connected Services](https://devblogs.microsoft.com/visualstudio/connecting-to-cloud-services/)
-- When you create a service principal object using the [Azure AD PowerShell Module](/powershell/azure/active-directory/install-adv2)
+- When you create a service principal object using the [Microsoft Graph PowerShell](/powershell/microsoftgraph/installation) module.
- When you [consent to an application](../develop/howto-convert-app-to-be-multi-tenant.md) as an administrator to use data in your tenant - When a [user consents to an application](../develop/howto-convert-app-to-be-multi-tenant.md) to use data in your tenant - When you enable certain services that store data in your tenant. One example is Password Reset, which is modeled as a service principal to store your password reset policy securely.
active-directory Assign App Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-app-owners.md
Last updated 01/26/2023
-zone_pivot_groups: enterprise-apps-minus-aad-powershell
+zone_pivot_groups: enterprise-apps-minus-former-powershell
#Customer intent: As an Azure AD administrator, I want to assign owners to enterprise applications.
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
Last updated 04/19/2023
-zone_pivot_groups: enterprise-apps-minus-aad-powershell
+zone_pivot_groups: enterprise-apps-minus-former-powershell
#customer intent: As an admin, I want to configure how end-users consent to applications.
active-directory Datawiza Sso Mfa To Owa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-sso-mfa-to-owa.md
+
+ Title: Configure Datawiza Access Proxy for Microsoft Entra SSO and MFA for Outlook Web Access
+description: Learn how to configure Datawiza Access Proxy for Microsoft Entra SSO and MFA for Outlook Web Access
+++++ Last updated : 07/14/2023+++++
+# Configure Datawiza Access Proxy for Microsoft Entra ID single sign-on and multi-factor authentication for Outlook Web Access
+
+In this tutorial, learn how to configure Datawiza Access Proxy (DAP) to enable Microsoft Entra ID single sign-on (SSO) and Microsoft Entra ID Multi-factor Authentication (MFA) for Outlook Web Access (OWA). This integration helps solve issues that arise when modern identity providers (IdPs) integrate with legacy OWA, which supports Kerberos token authentication to identify users.
+
+Often, integrating legacy apps with modern SSO is a challenge because legacy apps lack modern protocol support. Datawiza Access Proxy removes the protocol support gap, reduces integration overhead, and improves application security.
+
+Integration benefits:
+
+- Improved Zero Trust security with SSO, MFA, and Conditional Access:
+
+ - See, [Embrace proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust)
+
+ - See, [What is Conditional Access?](../conditional-access/overview.md)
+
+- No-code integration with Microsoft Entra ID and web apps:
+
+ - OWA
+
+ - Oracle JD Edwards
+
+ - Oracle E-Business Suite
+
+ - Oracle Siebel
+
+ - Oracle PeopleSoft
+
+ - Your apps
+
+ - See, [Easy authentication and authorization in Microsoft Entra ID with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/)
+
+- Use the Datawiza Cloud Management Console (DCMC) to manage access to cloud and on-premises apps:
+
+  - Go to [login.datawiza.com](https://login.datawiza.com/) to sign in or sign up for an account
+
+## Architecture
+
+DAP integration architecture includes the following components:
+
+- **Microsoft Entra ID** - identity and access management service that helps users sign in and access external and internal resources
+
+- **OWA** - the legacy, Exchange Server component to be protected by Microsoft Entra ID
+
+- **Domain controller** - a server that manages user authentication and access to network resources in a Windows-based network
+
+- **Key distribution center (KDC)** - distributes and manages secret keys and tickets in a Kerberos authentication system
+
+- **DAP** - a reverse-proxy that implements Open ID Connect (OIDC), OAuth, or Security Assertion Markup Language (SAML) for user sign in. DAP integrates with protected applications by using:
+
+ - HTTP headers
+
+ - Kerberos
+
+ - JSON web token (JWT)
+
+ - other protocols
+
+- **DCMC** - the DAP management console with UI and RESTful APIs to manage configurations and access control policies
+
+The following diagram illustrates a user flow with DAP in a customer
+network.
+
+![Screenshot shows the user flow with DAP in a customer network.](media/datawiza-access-proxy/datawiza-architecture.png)
+
+The following diagram illustrates the user flow from user browser to OWA.
+
+![Screenshot shows the user flow from user browser to owa.](media/datawiza-access-proxy/datawiza-flow-diagram.png)
+
+| Step | Description |
+|:--|:--|
+| 1. | User browser requests access to DAP-protected OWA.|
+| 2. | The user browser is directed to Azure AD.|
+| 3. | The Microsoft Entra ID sign-in page appears.|
+| 4.| The user enters credentials.|
+| 5.| Upon authentication, the user browser is directed to DAP.|
+| 6. | DAP and Azure AD exchange tokens.|
+| 7. | Azure AD issues the username and relevant information to DAP.|
+| 8.| DAP accesses the KDC with credentials. DAP requests a Kerberos ticket.|
+| 9.| KDC returns a Kerberos ticket.|
+|10.| DAP redirects the user browser to OWA.|
+| 11.| The OWA resource appears.|
+
+>[!NOTE]
+>Subsequent user browser requests contain the Kerberos token, which enables access to OWA via DAP.
+
+## Prerequisites
+
+You need the following components. Prior DAP experience isn't necessary.
+
+- An Azure account
+
+ - If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/)
+
+- An Azure AD tenant linked to the Azure account
+
+ - See, [Quickstart: Create a new tenant in Azure AD](../fundamentals/active-directory-access-create-new-tenant.md)
+
+- Docker and Docker Compose are required to run DAP
+
+ - See, [Get Docker](https://docs.docker.com/get-docker/)
+
+ - See, Install Docker Compose, [Overview](https://docs.docker.com/compose/install/)
+
+- User identities synchronized from an on-premises directory to Microsoft Entra ID, or created in Microsoft Entra ID and flowed back to your on-premises
+ directory
+
+ - See, [Azure AD Connect sync: Understand and customize
+ synchronization](../hybrid/how-to-connect-sync-whatis.md)
+
+- An account with Microsoft Entra ID Application Administrator permissions
+
+ - See, Application Administrator and other roles on, [Microsoft Entra ID built-in
+ roles](../roles/permissions-reference.md)
+
+- An Exchange Server environment. Supported versions:
+
+ - Microsoft IIS Integrated Windows Authentication (IWA) - IIS 7 or later
+
+ - Microsoft OWA IWA - IIS 7 or later
+
+- A Windows Server instance configured with IIS and Microsoft Entra ID Services running as a domain controller (DC) and implementing
+ Kerberos (IWA) SSO
+
+ - It's unusual for large production environments to have an application server (IIS) that also functions as a DC.
+
+- **Optional** - an SSL Web certificate to publish services over HTTPS, or DAP self-signed certificates, for testing.
+
+## Enable Kerberos authentication for OWA
+
+1. Sign in to the [Exchange admin center](https://admin.exchange.microsoft.com/).
+
+2. In the Exchange admin center, left navigation, select **servers**.
+
+3. Select the **virtual directories** tab.
+
+ ![Screenshot shows the virtual directories.](media/datawiza-access-proxy/virtual-directories.png)
+
+4. From the **select server** dropdown, select a server.
+
+5. Double-click **owa (Default Web Site)**.
+
+6. In the **Virtual Directory**, select the **authentication** tab.
+
+ ![Screenshot shows the virtual directories authentication tab.](media/datawiza-access-proxy/authentication-tab.png)
+
+7. On the authentication tab, select **Use one or more standard authentication methods**, and then select **Integrated Windows authentication**.
+
+8. Select **save**
+
+ ![Screenshot shows the internet-explorer tab.](media/datawiza-access-proxy/internet-explorer.png)
+
+9. Open a command prompt.
+
+10. Execute the **iisreset** command.
+
+ ![Screenshot shows the iis reset command.](media/datawiza-access-proxy/iis-reset.png)
+
+## Create a DAP service account
+
+DAP requires known Windows credentials that are used by the instance to configure the Kerberos service. The user is the DAP service account.
+
+1. Sign in to the Windows Server instance.
+
+2. Select **Active Directory Users and Computers**.
+
+3. Select the DAP instance down-arrow. The example is **datawizatest.com**.
+
+4. In the list, right-click **Users**.
+
+5. From the menu, select **New**, then select **User**.
+
+ ![Screenshot shows the users-computers.](media/datawiza-access-proxy/users-computers.png)
+
+6. On **New Object--User**, enter a **First name** and **Last name**.
+
+7. For **User logon name**, enter **dap**.
+
+8. Select **Next**.
+ ![Screenshot shows the user-logon.](media/datawiza-access-proxy/user-logon.png)
+
+9. In **Password**, enter a password.
+
+10. Enter it again in **Confirm**.
+
+11. Check the boxes for **User cannot change password** and **Password never expires**.
+ ![Screenshot shows the password menu.](media/datawiza-access-proxy/password.png)
+
+12. Select **Next**.
+
+13. Right-click the new user to see the configured properties.
+
+## Create a service principal name for the service account
+
+Before you create the service principal name (SPN), you can list SPNs and confirm the http SPN is among them.
+
+1. Use the following syntax on the Windows command line to list SPNs.
+
+ `setspn -Q */<domain.com>`
+
+2. Confirm the http SPN is among them.
+
+3. Use the following syntax on the Windows command line to register the host SPN for the account.
+
+ `setspn -A host/dap.datawizatest.com dap`
+
+>[!NOTE]
+>`host/dap.datawizatest.com` is the unique SPN, and `dap` is the service account you created.
+
+## Configure Windows Server IIS for Constrained Delegation
+
+1. Sign in to a domain controller (DC).
+
+2. Select **Active Directory Users and Computers.**
+
+ ![Screenshot shows the constrained delegation menu.](media/datawiza-access-proxy/constrained-delegation.png)
+
+3. In your organization, locate and select the **Users** object.
+
+4. Locate the service account you created.
+
+5. Right-click the account.
+
+6. From the list, select **Properties**.
+
+ ![Screenshot shows the properties.](media/datawiza-access-proxy/properties.png)
+
+7. Select the **Delegation** tab.
+
+8. Select **Trust this user for delegation to specified services only**.
+
+9. Select **Use any authentication protocol**.
+
+10. Select **Add**.
+
+ ![Screenshot shows the authentication protocol.](media/datawiza-access-proxy/authentication-protocol.png)
+
+11. On **Add Services**, select **Users or Computers.**
+
+ ![Screenshot shows the add services window.](media/datawiza-access-proxy/add-services.png)
+
+12. In **Enter the object names to select**, type in the machine name.
+
+13. Select **OK**
+
+ ![Screenshot shows the select object names fields.](media/datawiza-access-proxy/object-names.png)
+
+14. On **Add Services**, in Available services, under Service Type, select **http.**
+
+15. Select **OK**
+
+ ![Screenshot shows the add http services fields.](media/datawiza-access-proxy/add-http-services.png)
+
+## Integrate OWA with Microsoft Entra ID
+
+Use the following instructions to integrate OWA with Microsoft Entra ID.
+
+1. Sign in to the [Datawiza Cloud Management Console](https://console.datawiza.com/) (DCMC).
+
+2. The Welcome page appears.
+
+3. Select the orange **Getting started** button.
+
+ ![Screenshot shows the access proxy screen.](media/datawiza-access-proxy/access-proxy.png)
+
+### Deployment Name
+
+1. On **Deployment Name**, type a **Name** and a **Description**.
+
+2. Select **Next**.
+
+ ![Screenshot shows the deployment name screen.](media/datawiza-access-proxy/deployment-name.png)
+
+### Add Application
+
+1. On **Add Application**, for **Platform**, select **Web**.
+
+2. For **App name**, enter the app name. We recommend a meaningful naming convention.
+
+3. For **Public Domain**, enter the app's external-facing URL. For example, `https://external.example.com`. Use localhost DNS for
+ testing.
+
+4. For **Listen Port**, enter the port DAP listens on. If DAP isn't deployed behind a load balancer, you can use the port indicated in **Public Domain**.
+
+5. For **Upstream Servers**, enter the OWA implementation's URL and port combination.
+
+6. Select **Next**.
+
+ ![Screenshot shows the add application screen.](media/datawiza-access-proxy/add-application.png)
+
+### Configure IdP
+
+DCMC integration features help complete the Microsoft Entra ID configuration: DCMC calls the Microsoft Graph API to perform the tasks for you. This feature reduces time, effort, and errors.
+
+1. On **Configure IdP**, enter a **Name**.
+
+2. For **Protocol**, select **OIDC**.
+
+3. For **Identity Provider**, select **Microsoft Azure Active Directory**.
+
+4. Enable **Automatic Generator**.
+
+5. For **Supported account types**, select **Account in this organizational directory only (Single tenant)**.
+
+6. Select **Create**.
+
+ ![Screenshot shows the configure idp screen.](media/datawiza-access-proxy/configure-idp.png)
+
+7. A page appears with deployment steps for DAP and the application.
+
+8. See the deployment's Docker Compose file, which includes an image of the DAP, plus the **PROVISIONING_KEY** and **PROVISIONING_SECRET**. DAP uses these keys to pull the latest DCMC configuration and policies.
+
+### Configure Kerberos
+
+1. On your application page, select **Application Detail**.
+
+2. Select the **Advanced** tab.
+
+3. On the **Kerberos** sub tab, enable **Kerberos**.
+
+4. For **Kerberos Realm**, enter the location where the Kerberos database is stored, or the Active Directory domain.
+
+5. For **SPN**, enter the OWA application's service principal name. It's not the same SPN you created.
+
+6. For **Delegated Login Identity**, enter the application's external-facing URL. Use localhost DNS for testing.
+
+7. For **KDC**, enter a domain controller IP. If DNS is configured, enter a fully qualified domain name (FQDN).
+
+8. For **Service Account**, enter the service account you created.
+
+9. For **Auth Type**, select **Password**.
+
+10. Enter a service account **Password**.
+
+11. Select **Save**.
+
+ ![Screenshot shows the configure kerberos.](media/datawiza-access-proxy/kerberos-details.png)
+
+### SSL configuration
+
+1. On your application page, select the **Advanced** tab.
+
+2. Select the **SSL** subtab.
+
+3. Select **Edit**.
+
+ ![Screenshot shows the datawiza advanced window.](media/datawiza-access-proxy/datawiza-access-proxy.png)
+
+4. Select the option to **Enable SSL**.
+
+5. From **Cert Type**, select a certificate type. You can use the
+ provided self-signed localhost certificate for testing.
+
+ ![Screenshot shows the cert type.](media/datawiza-access-proxy/cert-type.png)
+
+6. Select **Save**.
+
+## Optional: Enable Microsoft Entra ID Multi-Factor Authentication
+
+To provide more sign-in security, you can enforce Microsoft Entra ID Multi-Factor Authentication. The process starts in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
+
+2. Select **Azure Active Directory**.
+
+3. Select **Manage**
+
+4. Select **Properties**
+
+5. Under **Tenant properties**, select **Manage security defaults**
+
+ ![Screenshot shows the manage security defaults.](media/datawiza-access-proxy/manage-security-defaults.png)
+
+6. For **Enable Security defaults**, select **Yes**
+
+7. Select **Save**
+
+## Next steps
+
+- [Video: Enable SSO and MFA for Oracle JD Edwards with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90)
+
+- [Tutorial: Configure Secure Hybrid Access with Azure AD and Datawiza](datawiza-with-azure-ad.md)
+
+- Go to docs.datawiza.com for [Datawiza user guides](https://docs.datawiza.com/)
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
zone_pivot_groups: enterprise-apps-all
# Disable user sign-in for an application
-There may be situations while configuring or managing an application where you don't want tokens to be issued for an application. Or, you may want to block an application that you don't want your employees to try to access. To block user access to an application, you can disable user sign-in for the application, which will prevent all tokens from being issued for that application.
+There may be situations while configuring or managing an application where you don't want tokens to be issued for an application. Or, you may want to block an application that you don't want your employees to try to access. To block user access to an application, you can disable user sign-in for the application, which prevents all tokens from being issued for that application.
-In this article, you'll learn how to prevent users from signing in to an application in Azure Active Directory through both the Azure portal and PowerShell. If you're looking for how to block specific users from accessing an application, use [user or group assignment](./assign-user-or-group-access-portal.md).
+In this article, you learn how to prevent users from signing in to an application in Azure Active Directory through both the Azure portal and PowerShell. If you're looking for how to block specific users from accessing an application, use [user or group assignment](./assign-user-or-group-access-portal.md).
[!INCLUDE [portal updates](../includes/portal-update.md)]
To disable user sign-in, you need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - One of the following roles: An administrator, or owner of the service principal.
-## Disable how a user signs in
+## Disable user sign-in
:::zone pivot="portal"
To disable user sign-in, you need:
:::zone pivot="aad-powershell"
-You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being pre-authorized by Microsoft, you can manually create the service principal for the app and then disable it by using the following Azure AD PowerShell cmdlet.
+You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being preauthorized by Microsoft. You can manually create the service principal for the app and then disable it by using the following Azure AD PowerShell cmdlet.
Ensure you've installed the AzureAD module (use the command `Install-Module -Name AzureAD`). In case you're prompted to install a NuGet module or the new Azure AD V2 PowerShell module, type Y and press ENTER.
if ($servicePrincipal) {
:::zone pivot="ms-powershell"
-You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being pre-authorized by Microsoft, you can manually create the service principal for the app and then disable it by using the following Microsoft Graph PowerShell cmdlet.
+You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being preauthorized by Microsoft. You can manually create the service principal for the app and then disable it by using the following Microsoft Graph PowerShell cmdlet.
Ensure you've installed the Microsoft Graph module (use the command `Install-Module Microsoft.Graph`).
else { $servicePrincipal = New-MgServicePrincipal -AppId $appId -AccountEnabl
:::zone pivot="ms-graph"
-You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being pre-authorized by Microsoft, you can manually create the service principal for the app and then disable it by using Microsoft Graph explorer.
+You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being preauthorized by Microsoft. You can manually create the service principal for the app and then disable it by using Microsoft Graph Explorer.
To disable sign-in to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
-You'll need to consent to the `Application.ReadWrite.All` permission.
+You need to consent to the `Application.ReadWrite.All` permission.
Run the following query to disable user sign-in to an application.
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
-zone_pivot_groups: enterprise-apps-minus-aad-powershell
+zone_pivot_groups: enterprise-apps-minus-former-powershell
#customer intent: As an admin, I want to grant tenant-wide admin consent to an application in Azure AD.
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Please see [Restore permissions granted to applications](restore-permissions.md)
You can access the Azure portal to view the permissions granted to an app. You can revoke permissions granted by admins for your entire organization, and you can get contextual PowerShell scripts to perform other actions.
-To revoke application permissions granted for the entire organization:
+To revoke an application's permissions that have been granted for the entire organization:
1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites section. 1. Select **Azure Active Directory**, and then select **Enterprise applications**.
To revoke application permissions granted for the entire organization:
1. Select **Permissions**. 1. The permissions listed in the **Admin consent** tab apply to your entire organization. Choose the permission you would like to remove, select the **...** control for that permission, and then choose **Revoke permission**.
-To review application permissions:
+To review an application's permissions:
1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites section. 1. Select **Azure Active Directory**, and then select **Enterprise applications**.
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
Attribute mappings allow you to define how data should flow between the source t
1. On the **Attribute Mapping** page, scroll down to review the user attributes that are synchronized between tenants in the **Attribute Mappings** section.
- The attributes selected as **Matching** properties are used to match the user accounts between tenants and avoid creating duplicates.
   The first attribute, `alternativeSecurityIdentifier`, is an internal attribute used to uniquely identify the user across tenants, match users in the source tenant with existing users in the target tenant, and ensure that each user only has one account. The matching attribute cannot be changed. Attempting to change the matching attribute will result in a `schemaInvalid` error.
:::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping.png" alt-text="Screenshot of the Attribute Mapping page that shows the list of Azure Active Directory attributes." lightbox="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping.png":::
active-directory Cross Tenant Synchronization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md
Previously updated : 05/31/2023 Last updated : 06/16/2023
Which clouds can cross-tenant synchronization be used in?
- Synchronization is only supported between two tenants in the same cloud.
- Cross-cloud (such as public cloud to Azure Government) isn't currently supported.
+#### Existing B2B users
+
+Will cross-tenant synchronization manage existing B2B users?
+
+- Yes. Cross-tenant synchronization uses an internal attribute called the alternativeSecurityIdentifier to uniquely match an internal user in the source tenant with an external / B2B user in the target tenant. Cross-tenant synchronization can update existing B2B users, ensuring that each user has only one account.
+- Cross-tenant synchronization cannot match an internal user in the source tenant with an internal user in the target tenant (whether the target user is of type member or type guest).
+
#### Synchronization frequency

How often does cross-tenant synchronization run?
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Previously updated : 03/31/2023 Last updated : 06/16/2023
Use the following table to better understand how to resolve errors that you find
> |AzureActiveDirectoryForbidden|External collaboration settings have blocked invitations.|Navigate to user settings and ensure that [external collaboration settings](../external-identities/external-collaboration-settings-configure.md) are permitted.|
> |InvitationCreationFailureInvalidPropertyValue|Potential causes:<br/>* The Primary SMTP Address is an invalid value.<br/>* UserType is neither guest nor member<br/>* Group email Address is not supported | Potential solutions:<br/>* The Primary SMTP Address has an invalid value. Resolving this issue will likely require updating the mail property of the source user. For more information, see [Prepare for directory synchronization to Microsoft 365](https://aka.ms/DirectoryAttributeValidations)<br/>* Ensure that the userType property is provisioned as type guest or member. This can be fixed by checking your attribute mappings to understand how the userType attribute is mapped.<br/>* The email address of the user matches the email address of a group in the tenant. Update the email address for one of the two objects.|
> |InvitationCreationFailureAmbiguousUser| The invited user has a proxy address that matches an internal user in the target tenant. The proxy address must be unique. | To resolve this error, delete the existing internal user in the target tenant or remove this user from sync scope.|
+> |AzureActiveDirectoryCannotUpdateObjectsMasteredOnPremises|If the user in the target tenant was originally synchronized from AD to Azure AD and converted to an external user, the source of authority is still on-premises and the user cannot be updated.|The user cannot be updated by cross-tenant synchronization.|
## Next steps * [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md)
active-directory Quickstart Azure Monitor Route Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md
Previously updated : 11/01/2022 Last updated : 07/14/2023
To use this feature, you need:
![Diagnostics settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/DiagnosticSettings.png)

9. After the categories have been selected, in the **Retention days** field, type in the number of days of retention you need for your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up.
-
+
+> [!NOTE]
+> The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md).
+
10. Select **Save** to save the setting.
11. Close the window to return to the Diagnostic settings pane.
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
In addition to providing global SLA performance, Azure AD now provides tenant-le
To access your tenant-level SLA performance:

1. Navigate to the [Microsoft Entra admin center](https://entra.microsoft.com) using the Reports Reader role (or higher).
-1. Go to **Azure AD** and select **Scenario Health** from the side menu.
+1. Go to **Azure AD**, select **Monitoring & health**, then select **Scenario Health** from the side menu.
1. Select the **SLA Monitoring** tab.
1. Hover over the graph to see the SLA performance for that month.
active-directory Reference Reports Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-data-retention.md
+
# Azure Active Directory data retention

In this article, you learn about the data retention policies for the different activity reports in Azure Active Directory (Azure AD).
In this article, you learn about the data retention policies for the different a
| Azure AD Edition | Collection Start |
| :-- | :-- |
-| Azure AD Premium P1 <br /> Azure AD Premium P2 | When you sign up for a subscription |
+| Azure AD Premium P1 <br /> Azure AD Premium P2 <br /> Entra Workload Identities Premium | When you sign up for a subscription |
| Azure AD Free| The first time you open [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) or use the [reporting APIs](./overview-reports.md) |

If you already have activities data with your free license, then you can see it immediately on upgrade. If you don't have any data, then it will take up to three days for the data to show up in the reports after you upgrade to a premium license. For security signals, the collection process starts when you opt in to use the **Identity Protection Center**.
You can retain the audit and sign-in activity data for longer than the default r
| Risky sign-ins | 7 days | 30 days | 90 days |

> [!NOTE]
-> Risky users are not deleted until the risk has been remediated.
-
-## Can I see last month's data after getting an Azure AD premium license?
+> Risky users and workload identities are not deleted until the risk has been remediated.
+
+## Can I see last month's data after getting a premium license?
**No**, you can't. Azure stores up to seven days of activity data for the free version. When you switch from a free to a premium version, you can only see up to seven days of data.
You can retain the audit and sign-in activity data for longer than the default r
- [Stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md)
- [Learn how to download Azure AD logs](howto-download-logs.md)
active-directory Sap Analytics Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* An OAuth client with authorization grant Client Credentials in SAP Analytics Cloud. To learn how, see: [Managing OAuth Clients and Trusted Identity Providers](https://help.sap.com/viewer/00f68c2e08b941f081002fd3691d86a7/release/en-US/4f43b54398fc4acaa5efa32badfe3df6.html) > [!NOTE]
-> This integration is also available to use from Microsoft Entra ID US Government Cloud environment. You can find this application in the Microsoft Entra ID US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+> This integration is also available to use from the Microsoft Entra ID US Government Cloud environment. You can follow the steps below and configure it in the same way as you do from the public cloud.
## Step 1. Plan your provisioning deployment
active-directory Tanium Cloud Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-cloud-sso-tutorial.md
- Title: Azure Active Directory SSO integration with Tanium Cloud SSO
-description: Learn how to configure single sign-on between Azure Active Directory and Tanium Cloud SSO.
-------- Previously updated : 03/29/2023----
-# Azure Active Directory SSO integration with Tanium Cloud SSO
-
-In this article, you learn how to integrate Tanium Cloud SSO with Azure Active Directory (Azure AD). Tanium, the industry's only provider of converged endpoint management (XEM), leads the paradigm shift in legacy approaches to managing complex security and technology environments. When you integrate Tanium Cloud SSO with Azure AD, you can:
-
-* Control in Azure AD who has access to Tanium Cloud SSO.
-* Enable your users to be automatically signed-in to Tanium Cloud SSO with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-You'll configure and test Azure AD single sign-on for Tanium Cloud SSO in a test environment. Tanium Cloud SSO supports both **SP** and **IDP** initiated single sign-on and also **Just In Time** user provisioning.
-
-## Prerequisites
-
-To integrate Azure Active Directory with Tanium Cloud SSO, you need:
-
-* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Tanium Cloud SSO single sign-on (SSO) enabled subscription.
-
-## Add application and assign a test user
-
-Before you begin the process of configuring single sign-on, you need to add the Tanium Cloud SSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
-
-### Add Tanium Cloud SSO from the Azure AD gallery
-
-Add Tanium Cloud SSO from the Azure AD application gallery to configure single sign-on with Tanium Cloud SSO. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
-
-### Create and assign Azure AD test user
-
-Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
-
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
-
-## Configure Azure AD SSO
-
-Complete the following steps to enable Azure AD single sign-on in the Azure portal.
-
-1. In the Azure portal, on the **Tanium Cloud SSO** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-
-1. On the **Basic SAML Configuration** section, perform the following steps:
-
- a. In the **Identifier** textbox, type a URL using the following pattern:
- `urn:amazon:cognito:sp:InstanceName`
-
- b. In the **Reply URL** textbox, type a URL using the following pattern:
- `https://<InstanceName>-tanium.auth.<SUBDOMAIN>.amazoncognito.com/saml2/idpresponse`
-
-1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
-
- In the **Sign on URL** textbox, type a URL using the following pattern:
- `https://<InstanceName>.cloud.tanium.com`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tanium Cloud SSO Client support team](mailto:integrations@tanium.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
-
- ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
-
-## Configure Tanium Cloud SSO
-
-To configure single sign-on on **Tanium Cloud SSO** side, you need to send the **App Federation Metadata Url** to [Tanium Cloud SSO support team](mailto:integrations@tanium.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create Tanium Cloud SSO test user
-
-In this section, a user called B.Simon is created in Tanium Cloud SSO. Tanium Cloud SSO supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Tanium Cloud SSO, a new one is created after authentication.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-#### SP initiated:
-
-* Click on **Test this application** in Azure portal. This will redirect to Tanium Cloud SSO Sign-on URL where you can initiate the login flow.
-
-* Go to Tanium Cloud SSO Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Tanium Cloud SSO for which you set up the SSO.
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the Tanium Cloud SSO tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Tanium Cloud SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
-
-* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
-
-## Next steps
-
-Once you configure Tanium Cloud SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Tanium Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-sso-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Tanium SSO
+description: Learn how to configure single sign-on between Azure Active Directory and Tanium SSO.
++++++++ Last updated : 07/14/2023++++
+# Azure Active Directory SSO integration with Tanium SSO
+
+In this article, you learn how to integrate Tanium SSO with Azure Active Directory (Azure AD). Tanium, the industry's only provider of converged endpoint management (XEM), leads the paradigm shift in legacy approaches to managing complex security and technology environments. When you integrate Tanium SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to Tanium SSO.
+* Enable your users to be automatically signed-in to Tanium SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Tanium SSO in a test environment. Tanium SSO supports both **SP** and **IDP** initiated single sign-on and also **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Tanium SSO, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Tanium SSO single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Tanium SSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Tanium SSO from the Azure AD gallery
+
+Add Tanium SSO from the Azure AD application gallery to configure single sign-on with Tanium SSO. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Tanium SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ [ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ](common/edit-urls.png#lightbox)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:amazon:cognito:sp:<InstanceName>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<InstanceName>-tanium.auth.<SUBDOMAIN>.amazoncognito.com/saml2/idpresponse`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<InstanceName>.cloud.tanium.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tanium SSO support team](mailto:integrations@tanium.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ [ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ](common/copy-metadataurl.png#lightbox)
+
+## Configure Tanium SSO
+
+To configure single sign-on on the **Tanium SSO** side, you need to send the **App Federation Metadata Url** to the [Tanium SSO support team](mailto:integrations@tanium.com). They use it to set up the SAML SSO connection properly on both sides.
+
+### Create Tanium SSO test user
+
+In this section, a user called B.Simon is created in Tanium SSO. Tanium SSO supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Tanium SSO, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects you to the Tanium SSO sign-on URL, where you can initiate the login flow.
+
+* Go to the Tanium SSO sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Tanium SSO instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Tanium SSO tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Tanium SSO instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Tanium SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks App Routing Nginx Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-nginx-prometheus.md
+
+ Title: Monitor the ingress-nginx controller metrics in the application routing add-on with Prometheus (preview)
+description: Configure Prometheus to scrape the ingress-nginx controller metrics.
+++++ Last updated : 07/12/2023+++
+# Monitor the ingress-nginx controller metrics in the application routing add-on with Prometheus in Grafana (preview)
+
+The ingress-nginx controller in the application routing add-on exposes many metrics for requests, the nginx process, and the controller that can be helpful in analyzing the performance and usage of your application.
+
+The application routing add-on exposes the Prometheus metrics endpoint at `/metrics` on port 10254.
++
+## Prerequisites
+
+- An Azure Kubernetes Service (AKS) cluster with the [application routing add-on enabled][app-routing].
+- A Prometheus instance, such as [Azure Monitor managed service for Prometheus][managed-prometheus-configure].
+- A Grafana instance, such as [Azure Managed Grafana][managed-grafana].
+
+## Validating the metrics endpoint
+
+To validate that metrics are being collected, you can set up a port forward to one of the ingress-nginx controller pods. Start by listing the pods in the add-on's namespace.
+
+```bash
+kubectl get pods -n app-routing-system
+```
+
+```output
+NAME READY STATUS RESTARTS AGE
+external-dns-667d54c44b-jmsxm 1/1 Running 0 4d6h
+nginx-657bb8cdcf-qllmx 1/1 Running 0 4d6h
+nginx-657bb8cdcf-wgcr7 1/1 Running 0 4d6h
+```
+
+Now forward a local port to port 10254 on one of the nginx pods.
+
+```bash
+kubectl port-forward nginx-657bb8cdcf-qllmx -n app-routing-system :10254
+```
+
+```output
+Forwarding from 127.0.0.1:43307 -> 10254
+Forwarding from [::1]:43307 -> 10254
+```
+
+Note the local port (`43307` in this case) and open `http://localhost:43307/metrics` in your browser. You should see the ingress-nginx controller metrics loading.
+
+![Screenshot of the Prometheus metrics in the browser.](./media/app-routing/prometheus-metrics.png)
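+
+If you prefer the terminal, you can also fetch the metrics over the forwarded port while the forward is still running; a quick sketch (replace `43307` with the local port `kubectl` printed):
+
+```bash
+# Print the first few metric lines from the forwarded endpoint.
+curl -s http://localhost:43307/metrics | head -n 20
+```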
+
+You can now terminate the `port-forward` process to close the forwarding.
+
+## Configuring Azure Monitor managed service for Prometheus and Azure Managed Grafana using Container Insights
+
+Azure Monitor managed service for Prometheus is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts. This service requires configuring the metrics addon for the Azure Monitor agent, which sends data to Prometheus. If your cluster isn't configured with the add-on, you can follow this article to [configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus][managed-prometheus-configure] and send the collected metrics to [an Azure Managed Grafana instance][create-grafana].
+
+### Enable pod annotation based scraping
+
+Once your cluster is updated with the Azure Monitor agent, you need to configure the agent to enable scraping based on pod annotations, which are added to the ingress-nginx pods. One way to configure this is through the [`ama-metrics-settings-configmap`](https://aka.ms/azureprometheus-addon-settings-configmap) ConfigMap in the `kube-system` namespace.
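+
+To see the annotations this setting keys off, you can inspect the ingress-nginx pods directly; a quick check (prints the raw annotation maps):
+
+```bash
+# List the annotations on the pods in the app-routing-system namespace.
+kubectl get pods -n app-routing-system -o jsonpath='{.items[*].metadata.annotations}'
+```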
+
+> [!CAUTION]
+> This will replace your existing `ama-metrics-settings-configmap` ConfigMap in the `kube-system` namespace. If you already have a configuration, you may want to take a backup or merge it with this configuration.
+>
+> You can back up an existing `ama-metrics-settings-configmap` ConfigMap, if it exists, by running `kubectl get configmap ama-metrics-settings-configmap -n kube-system -o yaml > ama-metrics-settings-configmap-backup.yaml`
+
+The following configuration sets the `podannotationnamespaceregex` parameter to `.*` to scrape all namespaces.
+
+```bash
+kubectl apply -f - <<EOF
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: ama-metrics-settings-configmap
+ namespace: kube-system
+data:
+ schema-version:
+ #string.used by agent to parse config. supported versions are {v1}. Configs with other schema versions will be rejected by the agent.
+ v1
+ config-version:
+ #string.used by customer to keep track of this config file's version in their source control/repository (max allowed 10 chars, other chars will be truncated)
+ ver1
+ prometheus-collector-settings: |-
+ cluster_alias = ""
+ default-scrape-settings-enabled: |-
+ kubelet = true
+ coredns = false
+ cadvisor = true
+ kubeproxy = false
+ apiserver = false
+ kubestate = true
+ nodeexporter = true
+ windowsexporter = false
+ windowskubeproxy = false
+ kappiebasic = true
+ prometheuscollectorhealth = false
+ # Regex for which namespaces to scrape through pod annotation based scraping.
+ # This is none by default. Use '.*' to scrape all namespaces of annotated pods.
+ pod-annotation-based-scraping: |-
+ podannotationnamespaceregex = ".*"
+ default-targets-metrics-keep-list: |-
+ kubelet = ""
+ coredns = ""
+ cadvisor = ""
+ kubeproxy = ""
+ apiserver = ""
+ kubestate = ""
+ nodeexporter = ""
+ windowsexporter = ""
+ windowskubeproxy = ""
+ podannotations = ""
+ kappiebasic = ""
+ minimalingestionprofile = true
+ default-targets-scrape-interval-settings: |-
+ kubelet = "30s"
+ coredns = "30s"
+ cadvisor = "30s"
+ kubeproxy = "30s"
+ apiserver = "30s"
+ kubestate = "30s"
+ nodeexporter = "30s"
+ windowsexporter = "30s"
+ windowskubeproxy = "30s"
+ kappiebasic = "30s"
+ prometheuscollectorhealth = "30s"
+ podannotations = "30s"
+ debug-mode: |-
+ enabled = false
+EOF
+```
+
+In a few minutes, the `ama-metrics` pods in the `kube-system` namespace should restart and pick up the new configuration.
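+
+You can watch for the agent pods to come back up; a simple check (pod names vary per cluster):
+
+```bash
+# The ama-metrics pods should show a recent AGE once they've restarted.
+kubectl get pods -n kube-system | grep ama-metrics
+```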
+
+## Review visualization of metrics in Azure Managed Grafana
+
+Now that you have Azure Monitor managed service for Prometheus and Azure Managed Grafana configured, you should [access your Managed Grafana instance][access-grafana].
+
+There are two [official ingress-nginx dashboards](https://github.com/kubernetes/ingress-nginx/tree/main/deploy/grafana/dashboards) that you can download and import into your Grafana instance:
+
+- Ingress-nginx controller dashboard
+- Request handling performance dashboard
+
+### Ingress-nginx controller dashboard
+
+This dashboard gives you visibility into request volume, connections, success rates, config reloads, and configs out of sync. You can also use it to view the network I/O pressure, memory, and CPU use of the ingress controller. Finally, it shows the P50, P95, and P99 percentile response times of your ingresses and their throughput.
+
+You can download this dashboard from [GitHub][grafana-nginx-dashboard].
+
+![Screenshot of a browser showing the ingress-nginx dashboard on Grafana.](media/app-routing/grafana-dashboard.png)
+
+### Request handling performance dashboard
+
+This dashboard gives you visibility into the request handling performance of the different ingress upstream destinations, which are your applications' endpoints that the ingress controller is forwarding traffic to. It shows the P50, P95 and P99 percentile of total request and upstream response times. You can also view aggregates of request errors and latency. Use this dashboard to review and improve the performance and scalability of your applications.
+
+You can download this dashboard from [GitHub][grafana-nginx-request-performance-dashboard].
+
+![Screenshot of a browser showing the ingress-nginx request handling performance dashboard on Grafana.](media/app-routing/grafana-dashboard-2.png)
+
+### Importing a dashboard
+
+To import a Grafana dashboard, expand the left menu and click on **Import** under Dashboards.
+
+![Screenshot of a browser showing the Grafana instance with Import dashboard highlighted.](media/app-routing/grafana-import.png)
+
+Then upload the desired dashboard file and click on **Load**.
+
+![Screenshot of a browser showing the Grafana instance import dashboard dialog.](media/app-routing/grafana-import-json.png)
+
+## Next steps
+
+- You can configure scaling your workloads using ingress metrics scraped with Prometheus using [Kubernetes Event Driven Autoscaler (KEDA)][KEDA]. Learn more about [integrating KEDA with AKS][keda-prometheus].
+- Create and run a load test with [Azure Load Testing][azure-load-testing] to test workload performance and optimize the scalability of your applications.
+
+<!-- LINKS - internal -->
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[app-routing]: /azure/aks/app-routing
+[managed-prometheus]: /azure/azure-monitor/essentials/prometheus-metrics-overview
+[managed-prometheus-configure]: /azure/azure-monitor/essentials/prometheus-metrics-enable?tabs=cli
+[managed-prometheus-custom-annotations]: /azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration#pod-annotation-based-scraping
+[managed-grafana]: /azure/managed-grafana/overview
+[create-grafana]: /azure/managed-grafana/quickstart-managed-grafana-portal
+[access-grafana]: /azure/managed-grafana/quickstart-managed-grafana-portal#access-your-managed-grafana-instance
+[keda]: /azure/aks/keda-about
+[keda-prometheus]: /azure/azure-monitor/essentials/integrate-keda#scalers
+[azure-load-testing]: /azure/load-testing/quickstart-create-and-run-load-test
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons
+[az-aks-disable-addons]: /cli/azure/aks#az-aks-disable-addons
+[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-keyvault-create]: /cli/azure/keyvault#az_keyvault_create
+[az-keyvault-certificate-import]: /cli/azure/keyvault/certificate#az_keyvault_certificate_import
+[az-keyvault-certificate-show]: /cli/azure/keyvault/certificate#az_keyvault_certificate_show
+[az-network-dns-zone-create]: /cli/azure/network/dns/zone#az_network_dns_zone_create
+[az-network-dns-zone-show]: /cli/azure/network/dns/zone#az_network_dns_zone_show
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-aks-addon-update]: /cli/azure/aks/addon#az_aks_addon_update
+[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy
+
+<!-- LINKS - external -->
+[osm-release]: https://github.com/openservicemesh/osm/releases/
+[nginx]: https://kubernetes.github.io/ingress-nginx/
+[external-dns]: https://github.com/kubernetes-incubator/external-dns
+[kubectl]: https://kubernetes.io/docs/reference/kubectl/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[grafana-nginx-dashboard]: https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
+[grafana-nginx-request-performance-dashboard]: https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/request-handling-performance.json
aks App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md
+
+ Title: Use the application routing add-on with Azure Kubernetes Service (AKS) clusters (preview)
+description: Use the application routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS).
++++ Last updated : 05/04/2023+++
+# Use the application routing add-on with Azure Kubernetes Service (AKS) clusters (preview)
+
+The application routing add-on configures an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. It can optionally integrate with Open Service Mesh (OSM) for end-to-end encryption of intra-cluster communication using mutual TLS (mTLS). When you deploy ingresses, the add-on creates publicly accessible DNS names for endpoints on an Azure DNS zone.
++
+## Application routing add-on overview
+
+The application routing add-on deploys the following components:
+
+- **[nginx ingress controller][nginx]**: This ingress controller is exposed to the internet.
+- **[external-dns controller][external-dns]**: This controller watches for Kubernetes ingress resources and creates DNS `A` records in the cluster-specific DNS zone. It's deployed only when you pass in the `--dns-zone-resource-id` argument.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+- An Azure Key Vault to store certificates.
+- The `aks-preview` Azure CLI extension version 0.5.137 or later installed. If you need to install or update, see [Install or update the `aks-preview` extension](#install-or-update-the-aks-preview-azure-cli-extension).
+- Optionally, a DNS solution, such as [Azure DNS](../dns/dns-getstarted-portal.md).
+
+### Install or update the `aks-preview` Azure CLI extension
+
+- Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.
+
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
+
+- If you need to update the extension version, you can do this using the [`az extension update`][az-extension-update] command.
+
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
+
+### Create and export a self-signed SSL certificate
+
+> [!NOTE]
+> If you already have an SSL certificate, you can skip this step.
+
+1. Create a self-signed SSL certificate to use with the ingress using the `openssl req` command. Make sure you replace *`<Hostname>`* with the DNS name you're using.
+
+ ```bash
+ openssl req -new -x509 -nodes -out aks-ingress-tls.crt -keyout aks-ingress-tls.key -subj "/CN=<Hostname>" -addext "subjectAltName=DNS:<Hostname>"
+ ```
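+
+   To sanity-check the subject and SAN on the certificate you just created, an optional step (the `-ext` flag requires OpenSSL 1.1.1 or later):
+
+   ```bash
+   # Print the subject and subjectAltName of the self-signed certificate.
+   openssl x509 -in aks-ingress-tls.crt -noout -subject -ext subjectAltName
+   ```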
+
+2. Export the SSL certificate and skip the password prompt using the `openssl pkcs12 -export` command.
+
+ ```bash
+ openssl pkcs12 -export -in aks-ingress-tls.crt -inkey aks-ingress-tls.key -out aks-ingress-tls.pfx
+ ```
+
+### Create an Azure Key Vault to store the certificate
+
+> [!NOTE]
+> If you already have an Azure Key Vault, you can skip this step.
+
+- Create an Azure Key Vault using the [`az keyvault create`][az-keyvault-create] command.
+
+ ```azurecli-interactive
+ az keyvault create -g <ResourceGroupName> -l <Location> -n <KeyVaultName>
+ ```
+
+### Import certificate into Azure Key Vault
+
+- Import the SSL certificate into Azure Key Vault using the [`az keyvault certificate import`][az-keyvault-certificate-import] command. If your certificate is password protected, you can pass the password through the `--password` flag.
+
+ ```azurecli-interactive
+ az keyvault certificate import --vault-name <KeyVaultName> -n <KeyVaultCertificateName> -f aks-ingress-tls.pfx [--password <certificate password if specified>]
+ ```
+
+### Create an Azure DNS zone
+
+> [!NOTE]
+> If you want the add-on to automatically manage creating host names via Azure DNS, you need to [create an Azure DNS zone](../dns/dns-getstarted-cli.md) if you don't have one already.
+
+- Create an Azure DNS zone using the [`az network dns zone create`][az-network-dns-zone-create] command.
+
+ ```azurecli-interactive
+ az network dns zone create -g <ResourceGroupName> -n <ZoneName>
+ ```
+
+## Enable application routing using Azure CLI
+
+# [Without Open Service Mesh (OSM)](#tab/without-osm)
+
+The following extra add-on is required:
+
+- **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.
+
+> [!IMPORTANT]
+> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](./csi-secrets-store-driver.md#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is two minutes.
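+
+For example, a sketch of setting a custom poll interval on an existing cluster with the add-on update command (flag names assume a current Azure CLI):
+
+```bash
+# Turn on secret autorotation and poll Key Vault every five minutes.
+az aks addon update -g <ResourceGroupName> -n <ClusterName> \
+  --addon azure-keyvault-secrets-provider \
+  --enable-secret-rotation --rotation-poll-interval 5m
+```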
+
+### Enable application routing on a new cluster
+
+- Enable application routing on a new AKS cluster using the [`az aks create`][az-aks-create] command and the `--enable-addons` parameter with the following add-ons:
+
+ ```azurecli-interactive
+ az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,web_application_routing --generate-ssh-keys --enable-secret-rotation
+ ```
+
+### Enable application routing on an existing cluster
+
+- Enable application routing on an existing cluster using the [`az aks enable-addons`][az-aks-enable-addons] command and the `--addons` parameter with the following add-ons:
+
+ ```azurecli-interactive
+ az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,web_application_routing --enable-secret-rotation
+ ```
+
+# [With Open Service Mesh (OSM)](#tab/with-osm)
+
+The following extra add-ons are required:
+
+- **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.
+- **open-service-mesh**: If you require encrypted intra-cluster traffic (recommended) between the nginx ingress and your services, the Open Service Mesh add-on, which provides mutual TLS (mTLS), is required.
+
+> [!IMPORTANT]
+> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](./csi-secrets-store-driver.md#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is two minutes.
+
+### Enable application routing on a new cluster
+
+- Enable application routing on a new AKS cluster using the [`az aks create`][az-aks-create] command and the `--enable-addons` parameter with the following add-ons:
+
+ ```azurecli-interactive
+ az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing --generate-ssh-keys --enable-secret-rotation
+ ```
+
+### Enable application routing on an existing cluster
+
+- Enable application routing on an existing cluster using the [`az aks enable-addons`][az-aks-enable-addons] command and the `--addons` parameter with the following add-ons:
+
+ ```azurecli-interactive
+ az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing --enable-secret-rotation
+ ```
+
+> [!NOTE]
+> To use the add-on with Open Service Mesh, you should install the `osm` command-line tool. This command-line tool contains everything needed to configure and manage Open Service Mesh. The latest binaries are available on the [OSM GitHub releases page][osm-release].
+
+# [With service annotations (retired)](#tab/service-annotations)
+
+> [!WARNING]
+> Configuring ingresses by adding annotations on the Service object is retired. Please consider [configuring via an Ingress object](?tabs=without-osm).
+
+The following extra add-on is required:
+
+- **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.
+
+> [!IMPORTANT]
+> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](./csi-secrets-store-driver.md#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is two minutes.
+
+### Enable application routing on a new cluster
+
+- Enable application routing on a new AKS cluster using the [`az aks create`][az-aks-create] command and the `--enable-addons` parameter with the following add-ons:
+
+ ```azurecli-interactive
+ az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,web_application_routing --generate-ssh-keys --enable-secret-rotation
+ ```
+
+### Enable application routing on an existing cluster
+
+- Enable application routing on an existing cluster using the [`az aks enable-addons`][az-aks-enable-addons] command and the `--addons` parameter with the following add-ons:
+
+ ```azurecli-interactive
+ az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,web_application_routing --enable-secret-rotation
+ ```
+++
+## Retrieve the add-on's managed identity object ID
+
+You use the managed identity in the next steps to grant permissions to manage the Azure DNS zone and retrieve secrets and certificates from the Azure Key Vault.
+
+- Get the add-on's managed identity object ID using the [`az aks show`][az-aks-show] command and setting the output to a variable named *MANAGEDIDENTITY_OBJECTID*.
+
+ ```azurecli-interactive
+ # Provide values for your environment
+ RGNAME=<ResourceGroupName>
+ CLUSTERNAME=<ClusterName>
+ MANAGEDIDENTITY_OBJECTID=$(az aks show -g ${RGNAME} -n ${CLUSTERNAME} --query ingressProfile.webAppRouting.identity.objectId -o tsv)
+ ```
+
+## Configure the add-on to use Azure DNS to manage DNS zones
+
+> [!NOTE]
+> If you plan to use Azure DNS, you need to update the add-on to pass in the `--dns-zone-resource-id`.
+
+1. Retrieve the resource ID for the DNS zone using the [`az network dns zone show`][az-network-dns-zone-show] command and setting the output to a variable named *ZONEID*.
+
+ ```azurecli-interactive
+ ZONEID=$(az network dns zone show -g <ResourceGroupName> -n <ZoneName> --query "id" --output tsv)
+ ```
+
+2. Grant **DNS Zone Contributor** permissions on the DNS zone using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+ az role assignment create --role "DNS Zone Contributor" --assignee $MANAGEDIDENTITY_OBJECTID --scope $ZONEID
+ ```
+
+3. Update the add-on to enable the integration with Azure DNS and install the **external-dns** controller using the [`az aks addon update`][az-aks-addon-update] command.
+
+ ```azurecli-interactive
+ az aks addon update -g <ResourceGroupName> -n <ClusterName> --addon web_application_routing --dns-zone-resource-id=$ZONEID
+ ```
+
+## Grant the add-on permissions to retrieve certificates from Azure Key Vault
+
+The application routing add-on creates a user-assigned managed identity in the cluster resource group. You need to grant permissions to the managed identity so it can retrieve SSL certificates from the Azure Key Vault.
+
+Azure Key Vault offers [two authorization systems](../key-vault/general/rbac-access-policy.md): **Azure role-based access control (Azure RBAC)**, which operates on the management plane, and the **access policy model**, which operates on both the management plane and the data plane. To find out which system your key vault is using, you can query the `enableRbacAuthorization` property.
+
+```azurecli-interactive
+az keyvault show --name <KeyVaultName> --query properties.enableRbacAuthorization
+```
+
+If Azure RBAC authorization is enabled for your key vault, you should configure permissions using Azure RBAC. Add the `Key Vault Secrets User` role assignment to the key vault.
+
+```azurecli-interactive
+KEYVAULTID=$(az keyvault show --name <KeyVaultName> --query "id" --output tsv)
+az role assignment create --role "Key Vault Secrets User" --assignee $MANAGEDIDENTITY_OBJECTID --scope $KEYVAULTID
+```
+
+If Azure RBAC authorization is not enabled for your key vault, you should configure permissions using the access policy model. Grant `GET` permissions for the application routing add-on to retrieve certificates from Azure Key Vault using the [`az keyvault set-policy`][az-keyvault-set-policy] command.
+
+```azurecli-interactive
+az keyvault set-policy --name <KeyVaultName> --object-id $MANAGEDIDENTITY_OBJECTID --secret-permissions get --certificate-permissions get
+```
+
+## Connect to your AKS cluster
+
+To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client. You can install it locally using the [`az aks install-cli`][az-aks-install-cli] command. If you use the Azure Cloud Shell, `kubectl` is already installed.
+
+- Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials -g <ResourceGroupName> -n <ClusterName>
+ ```
+
+## Deploy an application
+
+Application routing uses annotations on Kubernetes ingress objects to create the appropriate resources, create records on Azure DNS, and retrieve the SSL certificates from Azure Key Vault.
+
+# [Without Open Service Mesh (OSM)](#tab/without-osm)
+
+### Create the application namespace
+
+- Create a namespace called `hello-web-app-routing` to run the example pods using the `kubectl create namespace` command.
+
+ ```bash
+ kubectl create namespace hello-web-app-routing
+ ```
+
+### Create the deployment
+
+- Copy the following YAML into a new file named **deployment.yaml** and save the file to your local computer.
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld
+ spec:
+ containers:
+ - name: aks-helloworld
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "Welcome to Azure Kubernetes Service (AKS)"
+ ```
+
+### Create the service
+
+- Copy the following YAML into a new file named **service.yaml** and save the file to your local computer.
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld
+ ```
+
+### Create the ingress
+
+The application routing add-on creates an ingress class on the cluster called *webapprouting.kubernetes.azure.com*. When you create an ingress object with this class, it activates the add-on.
+
+1. Get the certificate URI to use in the ingress from Azure Key Vault using the [`az keyvault certificate show`][az-keyvault-certificate-show] command.
+
+ ```azurecli-interactive
+ az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
+ ```
+
+2. Copy the following YAML into a new file named **ingress.yaml** and save the file to your local computer.
+
+ > [!NOTE]
+ > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. `secretName` is the name of the secret that will be generated to store the certificate. This certificate will be presented in the browser.
+
+ ```yaml
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+ annotations:
+ kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ spec:
+ ingressClassName: webapprouting.kubernetes.azure.com
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+ tls:
+ - hosts:
+ - <Hostname>
+ secretName: keyvault-aks-helloworld
+ ```
+
+### Create the resources on the cluster
+
+- Create the resources on the cluster using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f deployment.yaml -n hello-web-app-routing
+ kubectl apply -f service.yaml -n hello-web-app-routing
+ kubectl apply -f ingress.yaml -n hello-web-app-routing
+ ```
+
+ The following example output shows the created resources:
+
+ ```output
+ deployment.apps/aks-helloworld created
+ service/aks-helloworld created
+ ingress.networking.k8s.io/aks-helloworld created
+ ```
+
+# [With Open Service Mesh (OSM)](#tab/with-osm)
+
+### Create the application namespace
+
+1. Create a namespace called `hello-web-app-routing` to run the example pods using the `kubectl create namespace` command.
+
+ ```bash
+ kubectl create namespace hello-web-app-routing
+ ```
+
+2. Add the application namespace to the OSM control plane using the `osm namespace add` command.
+
+ ```bash
+ osm namespace add hello-web-app-routing
+ ```
+
+### Create the deployment
+
+- Copy the following YAML into a new file named **deployment.yaml** and save the file to your local computer.
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld
+ spec:
+ containers:
+ - name: aks-helloworld
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "Welcome to Azure Kubernetes Service (AKS)"
+ ```
+
+### Create the service
+
+- Copy the following YAML into a new file named **service.yaml** and save the file to your local computer.
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld
+ ```
+
+### Create the ingress
+
+The application routing add-on creates an ingress class on the cluster called *webapprouting.kubernetes.azure.com*. When you create an ingress object with this class, it activates the add-on. The `kubernetes.azure.com/use-osm-mtls: "true"` annotation on the ingress object creates an Open Service Mesh (OSM) [IngressBackend](https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#ingressbackend-api) to configure a backend service to accept ingress traffic from trusted sources.
+
+OSM issues a certificate that Nginx uses as the client certificate to proxy HTTPS connections to TLS backends. The client certificate and CA certificate are stored in a Kubernetes secret that Nginx uses to authenticate service mesh back ends. For more information, see [Open Service Mesh: Ingress with Kubernetes Nginx Ingress Controller](https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/).
+
+1. Get the certificate URI to use in the ingress from Azure Key Vault using the [`az keyvault certificate show`][az-keyvault-certificate-show] command.
+
+ ```azurecli-interactive
+ az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
+ ```
+
+2. Copy the following YAML into a new file named **ingress.yaml** and save the file to your local computer.
+
+ > [!NOTE]
+ > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. `secretName` is the name of the secret that will be generated to store the certificate. This certificate will be presented in the browser.
+
+ ```yaml
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+ annotations:
+ kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
+ kubernetes.azure.com/use-osm-mtls: "true"
+ nginx.ingress.kubernetes.io/backend-protocol: HTTPS
+ nginx.ingress.kubernetes.io/configuration-snippet: |2-
+
+ proxy_ssl_name "default.hello-web-app-routing.cluster.local";
+ nginx.ingress.kubernetes.io/proxy-ssl-secret: kube-system/osm-ingress-client-cert
+ nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ spec:
+ ingressClassName: webapprouting.kubernetes.azure.com
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+ tls:
+ - hosts:
+ - <Hostname>
+ secretName: keyvault-aks-helloworld
+ ```
+
+### Create the resources on the cluster
+
+- Create the resources on the cluster using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f deployment.yaml -n hello-web-app-routing
+ kubectl apply -f service.yaml -n hello-web-app-routing
+ kubectl apply -f ingress.yaml -n hello-web-app-routing
+ ```
+
+ The following example output shows the created resources:
+
+ ```output
+ deployment.apps/aks-helloworld created
+ service/aks-helloworld created
+ ingress.networking.k8s.io/aks-helloworld created
+ ```
+
+# [With service annotations (retired)](#tab/service-annotations)
+
+> [!WARNING]
+> Configuring ingresses by adding annotations on the Service object is retired. Please consider [configuring via an Ingress object](?tabs=without-osm).
+
+### Create the application namespace
+
+- Create a namespace called `hello-web-app-routing` to run the example pods using the `kubectl create namespace` command.
+
+ ```bash
+ kubectl create namespace hello-web-app-routing
+ ```
+
+### Create the deployment
+
+- Copy the following YAML into a new file named **deployment.yaml** and save the file to your local computer.
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld
+ spec:
+ containers:
+ - name: aks-helloworld
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "Welcome to Azure Kubernetes Service (AKS)"
+ ```
+
+### Create the service with the annotations (retired)
+
+- Copy the following YAML into a new file named **service.yaml** and save the file to your local computer.
+
+ > [!NOTE]
+ > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. This certificate will be presented in the browser.
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ annotations:
+ kubernetes.azure.com/ingress-host: <Hostname>
+ kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
+ spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld
+ ```
+
+### Create the resources on the cluster
+
+- Create the resources on the cluster using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f deployment.yaml -n hello-web-app-routing
+ kubectl apply -f service.yaml -n hello-web-app-routing
+ ```
+
+ The following example output shows the created resources:
+
+ ```output
+ deployment.apps/aks-helloworld created
+ service/aks-helloworld created
+ ```
+++
+## Verify the managed ingress was created
+
+- Verify the managed ingress was created using the `kubectl get ingress` command.
+
+ ```bash
+ kubectl get ingress -n hello-web-app-routing
+ ```
+
+ The following example output shows the created managed ingress:
+
+ ```output
+ NAME CLASS HOSTS ADDRESS PORTS AGE
+ aks-helloworld webapprouting.kubernetes.azure.com myapp.contoso.com 20.51.92.19 80, 443 4m
+ ```
+
+## Access the endpoint over a DNS hostname
+
+If you haven't configured Azure DNS integration, you need to configure your own DNS provider with an `A` record that maps the host name you configured for the ingress, for example *myapp.contoso.com*, to the ingress IP address.
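+
+If your zone is hosted in Azure DNS but not managed by the add-on, you can create the record with the Azure CLI. The following is a sketch that assumes a *contoso.com* zone and the ingress IP from the example output above; substitute your own resource group, zone, and IP.
+
+```bash
+# Create an A record named "myapp" in the contoso.com zone pointing at the ingress IP
+az network dns record-set a add-record \
+    --resource-group myResourceGroup \
+    --zone-name contoso.com \
+    --record-set-name myapp \
+    --ipv4-address 20.51.92.19
+
+# Before DNS propagates, test the endpoint by pinning the hostname to the ingress IP
+# (add -k if you're testing with a self-signed certificate)
+curl -v --resolve myapp.contoso.com:443:20.51.92.19 https://myapp.contoso.com
+```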
+
+## Remove the application routing add-on
+
+1. Remove the associated namespace using the `kubectl delete namespace` command.
+
+ ```bash
+ kubectl delete namespace hello-web-app-routing
+ ```
+
+2. Remove the application routing add-on from your cluster using the [`az aks disable-addons`][az-aks-disable-addons] command.
+
+ ```azurecli-interactive
+ az aks disable-addons --addons web_application_routing --name myAKSCluster --resource-group myResourceGroup
+ ```
+
+When the application routing add-on is disabled, some Kubernetes resources may remain in the cluster. These resources include *configMaps* and *secrets* and are created in the *app-routing-system* namespace. You can remove these resources if you want.
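+
+A quick way to see what remains, and optionally remove it, is shown in the following sketch. Deleting the namespace assumes nothing else of yours runs in *app-routing-system*.
+
+```bash
+# List leftover resources created by the add-on
+kubectl get configmaps,secrets -n app-routing-system
+
+# Optionally remove them along with the namespace
+kubectl delete namespace app-routing-system
+```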
+
+<!-- LINKS - internal -->
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons
+[az-aks-disable-addons]: /cli/azure/aks#az-aks-disable-addons
+[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-keyvault-create]: /cli/azure/keyvault#az_keyvault_create
+[az-keyvault-certificate-import]: /cli/azure/keyvault/certificate#az_keyvault_certificate_import
+[az-keyvault-certificate-show]: /cli/azure/keyvault/certificate#az_keyvault_certificate_show
+[az-network-dns-zone-create]: /cli/azure/network/dns/zone#az_network_dns_zone_create
+[az-network-dns-zone-show]: /cli/azure/network/dns/zone#az_network_dns_zone_show
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-aks-addon-update]: /cli/azure/aks/addon#az_aks_addon_update
+[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy
+
+<!-- LINKS - external -->
+[osm-release]: https://github.com/openservicemesh/osm/releases/
+[nginx]: https://kubernetes.github.io/ingress-nginx/
+[external-dns]: https://github.com/kubernetes-incubator/external-dns
+[kubectl]: https://kubernetes.io/docs/reference/kubectl/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
aks Automated Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/automated-deployments.md
Automated deployments simplify the process of setting up a GitHub Action and cre
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] > [!NOTE]
-> This feature is not yet available in all regions.
+> Private clusters are currently not supported.
## Prerequisites
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
Title: Use the cluster autoscaler in Azure Kubernetes Service (AKS) description: Learn how to use the cluster autoscaler to automatically scale your Azure Kubernetes Service (AKS) clusters to meet application demands. Previously updated : 05/02/2023 Last updated : 07/14/2023 # Automatically scale a cluster to meet application demands on Azure Kubernetes Service (AKS)
-To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes that run your workloads. The cluster autoscaler component can watch for pods in your cluster that can't be scheduled because of resource constraints. When issues are detected, the number of nodes in a node pool increases to meet the application demand. Nodes are also regularly checked for a lack of running pods, with the number of nodes then decreased as needed. This ability to automatically scale up or down the number of nodes in your AKS cluster lets you run an efficient, cost-effective cluster.
+To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed.
This article shows you how to enable and manage the cluster autoscaler in an AKS cluster. ## Before you begin
-This article requires that you're running the Azure CLI version 2.0.76 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This article requires Azure CLI version 2.0.76 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
## About the cluster autoscaler
-To adjust to changing application demands, such as between the workday and evening or on a weekend, clusters often need a way to automatically scale. AKS clusters can scale in one of two ways:
+To adjust to changing application demands, such as between workdays and evenings or weekends, clusters often need a way to automatically scale. AKS clusters can scale in one of two ways:
* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. * The **horizontal pod autoscaler** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand. ![The cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands](media/autoscaler/cluster-autoscaler.png)
-Both the horizontal pod autoscaler and cluster autoscaler can decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity for a period of time. Pods on a node to be removed by the cluster autoscaler are safely scheduled elsewhere in the cluster.
+Both the horizontal pod autoscaler and cluster autoscaler can decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity for a period of time. Any pods on a node to be removed by the cluster autoscaler are safely scheduled elsewhere in the cluster.
-If the current node pool size is lower than the specified minimum or greater than the specified maximum when you enable autoscaling, the autoscaler waits to take effect until a new node is needed in the node pool or until a node can be safely deleted from the node pool.
-
-For more information about how scaling down works, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work).
+If the current node pool size is lower than the specified minimum or greater than the specified maximum when you enable autoscaling, the autoscaler waits to take effect until a new node is needed in the node pool or until a node can be safely deleted from the node pool. For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
The cluster autoscaler may be unable to scale down if pods can't move, such as in the following situations:
The cluster autoscaler may be unable to scale down if pods can't move, such as i
* A pod disruption budget (PDB) is too restrictive and doesn't allow the number of pods to fall below a certain threshold. * A pod uses node selectors or anti-affinity that can't be honored if scheduled on a different node.
-For more information about how the cluster autoscaler may be unable to scale down, see [What types of pods can prevent the cluster autoscaler from removing a node?][autoscaler-scaledown].
+For more information, see [What types of pods can prevent the cluster autoscaler from removing a node?][autoscaler-scaledown]
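+
+For instance, a PDB like the following (a hedged sketch, not from this article) blocks the autoscaler from draining a node when eviction would drop available replicas below the minimum:
+
+```bash
+# If the matching deployment runs exactly two replicas, this PDB forbids any
+# voluntary eviction, so the autoscaler can't remove the nodes hosting those pods.
+kubectl apply -f - <<EOF
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: myapp-pdb
+spec:
+  minAvailable: 2
+  selector:
+    matchLabels:
+      app: myapp
+EOF
+```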
The cluster autoscaler uses startup parameters for things like time intervals between scale events and resource thresholds. For more information on what parameters the cluster autoscaler uses, see [using the autoscaler profile](#use-the-cluster-autoscaler-profile).
-The cluster and horizontal pod autoscalers can work together and are often both deployed in a cluster. When combined, the horizontal pod autoscaler runs the number of pods required to meet application demand. The cluster autoscaler runs the number of nodes required to support the scheduled pods.
+The cluster autoscaler and horizontal pod autoscaler can work together and are often both deployed in a cluster. When combined, the horizontal pod autoscaler runs the number of pods required to meet application demand, and the cluster autoscaler runs the number of nodes required to support the scheduled pods.
> [!NOTE] > Manual scaling is disabled when you use the cluster autoscaler. Let the cluster autoscaler determine the required number of nodes. If you want to manually scale your cluster, [disable the cluster autoscaler](#disable-the-cluster-autoscaler-on-a-cluster).
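+
+As a concrete example (a sketch, not one of this article's numbered steps), enabling the cluster autoscaler on an existing cluster with the Azure CLI looks like the following:
+
+```bash
+# Enable the cluster autoscaler with a node range of 1 to 3
+az aks update \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --enable-cluster-autoscaler \
+    --min-count 1 \
+    --max-count 3
+```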
You can also configure more granular details of the cluster autoscaler by changi
| scale-down-delay-after-add | How long after scale up that scale down evaluation resumes | 10 minutes | | scale-down-delay-after-delete | How long after node deletion that scale down evaluation resumes | scan-interval | | scale-down-delay-after-failure | How long after scale down failure that scale down evaluation resumes | 3 minutes |
-| scale-down-unneeded-time | How long a node should be unneeded before it is eligible for scale down | 10 minutes |
-| scale-down-unready-time | How long an unready node should be unneeded before it is eligible for scale down | 20 minutes |
+| scale-down-unneeded-time | How long a node should be unneeded before it's eligible for scale down | 10 minutes |
+| scale-down-unready-time | How long an unready node should be unneeded before it's eligible for scale down | 20 minutes |
| scale-down-utilization-threshold | Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down | 0.5 | | max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds | | balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false | | expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
-| skip-nodes-with-local-storage | If true cluster autoscaler will never delete nodes with pods with local storage, for example, EmptyDir or HostPath | false |
-| skip-nodes-with-system-pods | If true cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods) | true |
+| skip-nodes-with-local-storage | If true, cluster autoscaler doesn't delete nodes with pods with local storage, for example, EmptyDir or HostPath | false |
+| skip-nodes-with-system-pods | If true, cluster autoscaler doesn't delete nodes with pods from kube-system (except for DaemonSet or mirror pods) | true |
| max-empty-bulk-delete | Maximum number of empty nodes that can be deleted at the same time | 10 nodes | | new-pod-scale-up-delay | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age. | 0 seconds | | max-total-unready-percentage | Maximum percentage of unready nodes in the cluster. After this percentage is exceeded, CA halts operations | 45% | | max-node-provision-time | Maximum time the autoscaler waits for a node to be provisioned | 15 minutes |
-| ok-total-unready-count | Number of allowed unready nodes, irrespective of max-total-unready-percentage | 3 nodes |
+| ok-total-unready-count | Number of allowed unready nodes, irrespective of max-total-unready-percentage | Three nodes |
> [!IMPORTANT] > When using the autoscaler profile, keep the following information in mind: > > * The cluster autoscaler profile affects **all node pools** that use the cluster autoscaler. You can't set an autoscaler profile per node pool. When you set the profile, any existing node pools with the cluster autoscaler enabled immediately start using the profile.
-> * The cluster autoscaler profile requires version *2.11.1* or greater of the Azure CLI. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+> * The cluster autoscaler profile requires Azure CLI version *2.11.1* or later. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
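+
+For example, you can change a single parameter from the table above on an existing cluster, as in the following sketch using this article's example names:
+
+```bash
+# Shorten the interval the autoscaler uses between scale evaluations
+az aks update \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --cluster-autoscaler-profile scan-interval=30s
+```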
### Set the cluster autoscaler profile on a new cluster
You can retrieve logs and status updates from the cluster autoscaler to help dia
Use the following steps to configure logs to be pushed from the cluster autoscaler into Log Analytics:
-1. Set up a rule for resource logs to push cluster autoscaler logs to Log Analytics. Follow the [instructions here][aks-view-master-logs], and make sure you check the box for `cluster-autoscaler` when selecting options for "Logs".
-2. Select the "Logs" section on your cluster via the Azure portal.
-3. Input the following example query into Log Analytics:
+1. Set up a rule for resource logs to push cluster autoscaler logs to Log Analytics using the [instructions here][aks-view-master-logs]. Make sure you check the box for `cluster-autoscaler` when selecting options for **Logs**.
+2. Select the **Logs** section on your cluster.
+3. Enter the following example query into Log Analytics:
```kusto AzureDiagnostics | where Category == "cluster-autoscaler" ```
- As long as there are logs to retrieve, you should see logs similar to the following:
+    As long as there are logs to retrieve, you should see logs similar to the following example:
![Log Analytics logs](media/autoscaler/autoscaler-logs.png)
-The cluster autoscaler also writes out the health status to a `configmap` named `cluster-autoscaler-status`. You can retrieve these logs using the following `kubectl` command:
+ The cluster autoscaler also writes out the health status to a `configmap` named `cluster-autoscaler-status`. You can retrieve these logs using the following `kubectl` command:
-```bash
-kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml
-```
+ ```bash
+ kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml
+ ```
-To learn more about the autoscaler logs, read the FAQ on the [Kubernetes/autoscaler GitHub project][kubernetes-faq].
+To learn more about the autoscaler logs, see the [Kubernetes/autoscaler GitHub project FAQ][kubernetes-faq].
## Use the cluster autoscaler with node pools ### Use the cluster autoscaler with multiple node pools enabled
-You can use the cluster autoscaler with [multiple node pools][aks-multiple-node-pools] enabled. When using both features together, you enable the cluster autoscaler on each individual node pool in the cluster and can pass unique autoscaling rules to each.
+You can use the cluster autoscaler with [multiple node pools][aks-multiple-node-pools] enabled. When using both features together, you can enable the cluster autoscaler on each individual node pool in the cluster and pass unique autoscaling rules to each node pool.
-* Update an existing node pool's settings using the [`az aks nodepool update`][az-aks-nodepool-update] command. The following command continues from the [previous steps](#enable-the-cluster-autoscaler-on-a-new-cluster) in this article:
+* Update the settings on an existing node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command. The following command continues from the [previous steps](#enable-the-cluster-autoscaler-on-a-new-cluster) in this article:
```azurecli-interactive az aks nodepool update \
You can use the cluster autoscaler with [multiple node pools][aks-multiple-node-
### Re-enable the cluster autoscaler on a node pool
-Re-enable the cluster autoscaler on a node pool using the [az aks nodepool update][az-aks-nodepool-update] command and specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
+* Re-enable the cluster autoscaler on a node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command and specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
-> [!NOTE]
-> If you plan on using the cluster autoscaler with node pools that span multiple zones and leverage scheduling features related to zones such as volume topological scheduling, we recommend that you have one node pool per zone and enable the `--balance-similar-node-groups` through the autoscaler profile. This ensure the autoscaler can successfully scale up and keep the sizes of the node pools balanced.
+ ```azurecli-interactive
+ az aks nodepool update \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name nodepool1 \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 5
+ ```
+
+ > [!NOTE]
+    > If you plan on using the cluster autoscaler with node pools that span multiple zones and leverage scheduling features related to zones, such as volume topological scheduling, we recommend you have one node pool per zone and enable the `--balance-similar-node-groups` setting through the autoscaler profile. This ensures the autoscaler can successfully scale up and keep the sizes of the node pools balanced.
## Configure the horizontal pod autoscaler
-Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the number of pods in a deployment depending on CPU utilization or other select metrics. The [Metrics Server][metrics-server] is used to provide resource utilization to Kubernetes. You can configure horizontal pod autoscaling through the `kubectl autoscale` command or through a manifest. For more details on using the horizontal pod autoscaler, see the [HorizontalPodAutoscaler Walkthrough][kubernetes-hpa-walkthrough].
+Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the number of pods in a deployment depending on CPU utilization or other select metrics. The [Metrics Server][metrics-server] provides resource utilization to Kubernetes. You can configure horizontal pod autoscaling through the `kubectl autoscale` command or through a manifest. For more information on using the horizontal pod autoscaler, see the [HorizontalPodAutoscaler walkthrough][kubernetes-hpa-walkthrough].
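+
+As a quick illustration (the deployment name is a placeholder), the imperative form looks like this:
+
+```bash
+# Scale a deployment between 3 and 10 replicas, targeting 50% average CPU utilization
+kubectl autoscale deployment <deployment-name> --cpu-percent=50 --min=3 --max=10
+```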
## Next steps
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
Title: Integrate Azure Container Registry with Azure Kubernetes Service
-description: Learn how to integrate Azure Kubernetes Service (AKS) with Azure Container Registry (ACR)
+ Title: Integrate Azure Container Registry with Azure Kubernetes Service (AKS)
+description: Learn how to integrate Azure Kubernetes Service (AKS) with Azure Container Registry (ACR).
Previously updated : 03/13/2023 Last updated : 07/12/2023 ms.tool: azure-cli, azure-powershell ms.devlang: azurecli
-# Authenticate with Azure Container Registry from Azure Kubernetes Service
+# Authenticate with Azure Container Registry (ACR) from Azure Kubernetes Service (AKS)
-When using [Azure Container Registry (ACR)][acr-intro] with Azure Kubernetes Service (AKS), you need to establish an authentication mechanism. Configuring the required permissions between ACR and AKS can be accomplished using the Azure CLI, Azure PowerShell, and Azure portal. This article provides examples to configure authentication between these Azure services using the Azure CLI or Azure PowerShell.
+When using [Azure Container Registry (ACR)][acr-intro] with Azure Kubernetes Service (AKS), you need to establish an authentication mechanism. You can configure the required permissions between ACR and AKS using the Azure CLI, Azure PowerShell, or Azure portal. This article provides examples to configure authentication between these Azure services using the Azure CLI or Azure PowerShell.
The AKS to ACR integration assigns the [**AcrPull** role][acr-pull] to the [Azure Active Directory (Azure AD) **managed identity**][aad-identity] associated with the agent pool in your AKS cluster. For more information on AKS managed identities, see [Summary of managed identities][summary-msi]. > [!IMPORTANT]
-> There is a latency issue with Azure Active Directory groups when attaching ACR. If the **AcrPull** role is granted to an Azure AD group and the kubelet identity is added to the group to complete the RBAC configuration, there may be a delay before the RBAC group takes effect. If you are running automation that requires the RBAC configuration to be complete, we recommended you use the [Bring your own kubelet identity][byo-kubelet-identity] as a workaround. You can pre-create a user-assigned identity, add it to the Azure AD group, then use the identity as the kubelet identity to create an AKS cluster. This ensures the identity is added to the Azure AD group before a token is generated by kubelet, which avoids the latency issue.
+> There's a latency issue with Azure Active Directory groups when attaching ACR. If the **AcrPull** role is granted to an Azure AD group and the kubelet identity is added to the group to complete the RBAC configuration, there may be a delay before the RBAC group takes effect. If you're running automation that requires the RBAC configuration to be complete, we recommend you use [Bring your own kubelet identity][byo-kubelet-identity] as a workaround. You can pre-create a user-assigned identity, add it to the Azure AD group, then use the identity as the kubelet identity to create an AKS cluster. This ensures the identity is added to the Azure AD group before a token is generated by kubelet, which avoids the latency issue.
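+
+A sketch of that workaround with the Azure CLI might look like the following. The identity and group names are placeholders, and the Azure AD group is assumed to already hold the **AcrPull** role:
+
+```bash
+# Pre-create the identity that will serve as the kubelet identity
+az identity create --name myKubeletIdentity --resource-group myResourceGroup
+
+# Add the identity's principal ID to the Azure AD group that holds the AcrPull role
+PRINCIPAL_ID=$(az identity show -n myKubeletIdentity -g myResourceGroup --query principalId -o tsv)
+az ad group member add --group myAcrPullGroup --member-id $PRINCIPAL_ID
+
+# Create the cluster with the pre-created kubelet identity
+KUBELET_ID=$(az identity show -n myKubeletIdentity -g myResourceGroup --query id -o tsv)
+az aks create -n myAKSCluster -g myResourceGroup \
+    --enable-managed-identity \
+    --assign-identity <control-plane-identity-resource-id> \
+    --assign-kubelet-identity $KUBELET_ID
+```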
> [!NOTE] > This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][image-pull-secret]. ## Before you begin
-* You need to have the [**Owner**][rbac-owner], [**Azure account administrator**][rbac-classic], or [**Azure co-administrator**][rbac-classic] role on your **Azure subscription**.
+* You need the [**Owner**][rbac-owner], [**Azure account administrator**][rbac-classic], or [**Azure co-administrator**][rbac-classic] role on your Azure subscription.
* To avoid needing one of these roles, you can instead use an existing managed identity to authenticate ACR from AKS. For more information, see [Use an Azure managed identity to authenticate to an ACR](../container-registry/container-registry-authentication-managed-identity.md). * If you're using Azure CLI, this article requires that you're running Azure CLI version 2.7.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. * If you're using Azure PowerShell, this article requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install]. * Examples and syntax to use Terraform for configuring ACR can be found in the [Terraform reference][terraform-reference].
-## Create a new AKS cluster with ACR integration
-
-You can set up AKS and ACR integration during the creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure AD managed identity is used.
-
-### Create an ACR
-
-If you don't already have an ACR, create one using the following command.
+## Create a new ACR
#### [Azure CLI](#tab/azure-cli)
-```azurecli
-# Set this variable to the name of your ACR. The name must be globally unique.
-# Connected registry name must use only lowercase
+* If you don't already have an ACR, create one using the [`az acr create`][az-acr-create] command. The following example sets the `MYACR` variable to the name of the ACR, *mycontainerregistry*, and uses the variable to create the registry. Your ACR name must be globally unique and use only lowercase letters.
-MYACR=mycontainerregistry
+ ```azurecli-interactive
+ MYACR=mycontainerregistry
-az acr create -n $MYACR -g myContainerRegistryResourceGroup --sku basic
-```
+ az acr create -n $MYACR -g myContainerRegistryResourceGroup --sku basic
+ ```
#### [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-# Set this variable to the name of your ACR. The name must be globally unique.
-# Connected registry name must use only lowercase
+* If you don't already have an ACR, create one using the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet. The following example sets the `MYACR` variable to the name of the ACR, *mycontainerregistry*, and uses the variable to create the registry. Your ACR name must be globally unique and use only lowercase letters.
-$MYACR = 'mycontainerregistry'
+ ```azurepowershell-interactive
+ $MYACR = 'mycontainerregistry'
-New-AzContainerRegistry -Name $MYACR -ResourceGroupName myContainerRegistryResourceGroup -Sku Basic
-```
+ New-AzContainerRegistry -Name $MYACR -ResourceGroupName myContainerRegistryResourceGroup -Sku Basic
+ ```
-### Create a new AKS cluster and integrate with an existing ACR
-
-If you already have an ACR, use the following command to create a new AKS cluster with ACR integration. This command allows you to authorize an existing ACR in your subscription and configures the appropriate **AcrPull** role for the managed identity. Supply valid values for your parameters below.
+## Create a new AKS cluster and integrate with an existing ACR
#### [Azure CLI](#tab/azure-cli)
-```azurecli
-# Set this variable to the name of your ACR. The name must be globally unique.
-# Connected registry name must use only lowercase
-
-MYACR=mycontainerregistry
-
-# Create an AKS cluster with ACR integration.
+* Create a new AKS cluster and integrate with an existing ACR using the [`az aks create`][az-aks-create] command with the [`--attach-acr` parameter][cli-param]. This command allows you to authorize an existing ACR in your subscription and configures the appropriate **AcrPull** role for the managed identity.
-az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr $MYACR
-```
+ ```azurecli-interactive
+ MYACR=mycontainerregistry
-Alternatively, you can specify the ACR name using an ACR resource ID using the following format:
+ az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr $MYACR
+ ```
-`/subscriptions/\<subscription-id\>/resourceGroups/\<resource-group-name\>/providers/Microsoft.ContainerRegistry/registries/\<name\>`
+ This command may take several minutes to complete.
-> [!NOTE]
-> If you're using an ACR located in a different subscription from your AKS cluster, use the ACR *resource ID* when attaching or detaching from the cluster.
->
-> ```azurecli
-> az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr /subscriptions/<subscription-id>/resourceGroups/myContainerRegistryResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry
-> ```
-
-This command may take several minutes to complete.
+ > [!NOTE]
+ > If you're using an ACR located in a different subscription from your AKS cluster or would prefer to use the ACR *resource ID* instead of the ACR name, you can do so using the following syntax:
+ >
+ > ```azurecli
+ > az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr /subscriptions/<subscription-id>/resourceGroups/myContainerRegistryResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry
+ > ```
#### [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-# Set this variable to the name of your ACR. The name must be globally unique.
-# Connected registry name must use only lowercase
-
-$MYACR = 'mycontainerregistry'
+* Create a new AKS cluster and integrate with an existing ACR using the [`New-AzAksCluster`][new-azakscluster] cmdlet with the [`-AcrNameToAttach` parameter][ps-attach] parameter. This command allows you to authorize an existing ACR in your subscription and configures the appropriate **AcrPull** role for the managed identity.
-# Create an AKS cluster with ACR integration.
+ ```azurepowershell-interactive
+ $MYACR = 'mycontainerregistry'
-New-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -GenerateSshKey -AcrNameToAttach $MYACR
-```
+ New-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -GenerateSshKey -AcrNameToAttach $MYACR
+ ```
-This command may take several minutes to complete.
+ This command may take several minutes to complete.
-## Configure ACR integration for existing AKS clusters
+## Configure ACR integration for an existing AKS cluster
-### Attach an ACR to an AKS cluster
+### Attach an ACR to an existing AKS cluster
#### [Azure CLI](#tab/azure-cli)
-Integrate an existing ACR with an existing AKS cluster using the [`--attach-acr` parameter][cli-param] and valid values for **acr-name** or **acr-resource-id**.
+* Integrate an existing ACR with an existing AKS cluster using the [`az aks update`][az-aks-update] command with the [`--attach-acr` parameter][cli-param] and a valid value for **acr-name** or **acr-resource-id**.
-```azurecli
-# Attach using acr-name
-az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>
+ ```azurecli-interactive
+ # Attach using acr-name
+ az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>
-# Attach using acr-resource-id
-az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-resource-id>
-```
+ # Attach using acr-resource-id
+ az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-resource-id>
+ ```
-> [!NOTE]
-> The `az aks update --attach-acr` command uses the permissions of the user running the command to create the ACR role assignment. This role is assigned to the [kubelet][kubelet] managed identity. For more information on AKS managed identities, see [Summary of managed identities][summary-msi].
+ > [!NOTE]
+ > The `az aks update --attach-acr` command uses the permissions of the user running the command to create the ACR role assignment. This role is assigned to the [kubelet][kubelet] managed identity. For more information on AKS managed identities, see [Summary of managed identities][summary-msi].
#### [Azure PowerShell](#tab/azure-powershell)
-Integrate an existing ACR with an existing AKS cluster using the [`-AcrNameToAttach` parameter][ps-attach] and valid values for **acr-name**.
+* Integrate an existing ACR with an existing AKS cluster using the [`Set-AzAksCluster`][set-azakscluster] command with the [`-AcrNameToAttach` parameter][ps-attach] and a valid value for **acr-name**.
-```azurepowershell
-Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToAttach <acr-name>
-```
+ ```azurepowershell-interactive
+ Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToAttach <acr-name>
+ ```
-> [!NOTE]
-> Running the `Set-AzAksCluster -AcrNameToAttach` cmdlet uses the permissions of the user running the command to create the role ACR assignment. This role is assigned to the [kubelet][kubelet] managed identity. For more information on AKS managed identities, see [Summary of managed identities][summary-msi].
+ > [!NOTE]
+ > Running the `Set-AzAksCluster -AcrNameToAttach` cmdlet uses the permissions of the user running the command to create the role ACR assignment. This role is assigned to the [kubelet][kubelet] managed identity. For more information on AKS managed identities, see [Summary of managed identities][summary-msi].
Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameT
#### [Azure CLI](#tab/azure-cli)
-Remove the integration between an ACR and an AKS cluster using the [`--detach-acr` parameter][cli-param] and valid values for **acr-name** or **acr-resource-id**.
+* Remove the integration between an ACR and an AKS cluster using the [`az aks update`][az-aks-update] command with the [`--detach-acr` parameter][cli-param] and a valid value for **acr-name** or **acr-resource-id**.
-```azurecli
-# Detach using acr-name
-az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-name>
+ ```azurecli-interactive
+ # Detach using acr-name
+ az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-name>
-# Detach using acr-resource-id
-az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-resource-id>
-```
+ # Detach using acr-resource-id
+ az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-resource-id>
+ ```
#### [Azure PowerShell](#tab/azure-powershell)
-Remove the integration between an ACR and an AKS cluster using the [`-AcrNameToDetach` parameter][ps-detach] and valid values for **acr-name**.
+* Remove the integration between an ACR and an AKS cluster using the [`Set-AzAksCluster`][set-azakscluster] command with the [`-AcrNameToDetach` parameter][ps-detach] and a valid value for **acr-name**.
-```azurepowershell
-Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToDetach <acr-name>
-```
+ ```azurepowershell-interactive
+ Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToDetach <acr-name>
+ ```
Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameT
### Import an image into your ACR
-Run the following command to import an image from Docker Hub into your ACR.
- #### [Azure CLI](#tab/azure-cli)
-```azurecli
-az acr import -n <acr-name> --source docker.io/library/nginx:latest --image nginx:v1
-```
+* Import an image from Docker Hub into your ACR using the [`az acr import`][az-acr-import] command.
+
+ ```azurecli-interactive
+ az acr import -n <acr-name> --source docker.io/library/nginx:latest --image nginx:v1
+ ```
#### [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-Import-AzContainerRegistryImage -RegistryName <acr-name> -ResourceGroupName myResourceGroup -SourceRegistryUri docker.io -SourceImage library/nginx:latest
-```
+* Import an image from Docker Hub into your ACR using the `Import-AzContainerRegistryImage` cmdlet.
+
+ ```azurepowershell-interactive
+ Import-AzContainerRegistryImage -RegistryName <acr-name> -ResourceGroupName myResourceGroup -SourceRegistryUri docker.io -SourceImage library/nginx:latest
+ ```
### Deploy the sample image from ACR to AKS
-Ensure you have the proper AKS credentials.
- #### [Azure CLI](#tab/azure-cli)
-```azurecli
-az aks get-credentials -g myResourceGroup -n myAKSCluster
-```
-
-#### [Azure PowerShell](#tab/azure-powershell)
+1. Ensure you have the proper AKS credentials using the [`az aks get-credentials`][az-aks-get-credentials] command.
-```azurepowershell
-Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
-```
+ ```azurecli-interactive
+ az aks get-credentials -g myResourceGroup -n myAKSCluster
+ ```
-
+2. Create a file called **acr-nginx.yaml** using the following sample YAML and replace **acr-name** with the name of your ACR.
-Create a file called **acr-nginx.yaml** using the sample YAML below. Replace **acr-name** with the name of your ACR.
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: nginx0-deployment
- labels:
- app: nginx0-deployment
-spec:
- replicas: 2
- selector:
- matchLabels:
- app: nginx0
- template:
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
metadata:
+ name: nginx0-deployment
labels:
- app: nginx0
+ app: nginx0-deployment
spec:
- containers:
- - name: nginx
- image: <acr-name>.azurecr.io/nginx:v1
- ports:
- - containerPort: 80
-```
+ replicas: 2
+ selector:
+ matchLabels:
+ app: nginx0
+ template:
+ metadata:
+ labels:
+ app: nginx0
+ spec:
+ containers:
+ - name: nginx
+ image: <acr-name>.azurecr.io/nginx:v1
+ ports:
+ - containerPort: 80
+ ```
+
+3. Run the deployment in your AKS cluster using the `kubectl apply` command.
+
+ ```console
+ kubectl apply -f acr-nginx.yaml
+ ```
+
+4. Monitor the deployment using the `kubectl get pods` command.
+
+ ```console
+ kubectl get pods
+ ```
+
+ The output should show two running pods, as shown in the following example output:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ nginx0-deployment-669dfc4d4b-x74kr 1/1 Running 0 20s
+ nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
+ ```
-After creating the file, run the following deployment in your AKS cluster.
+#### [Azure PowerShell](#tab/azure-powershell)
-```console
-kubectl apply -f acr-nginx.yaml
-```
+1. Ensure you have the proper AKS credentials using the [`Import-AzAksCredential`][import-azakscredential] cmdlet.
-You can monitor the deployment by running `kubectl get pods`.
+ ```azurepowershell-interactive
+ Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+ ```
-```console
-kubectl get pods
-```
+2. Create a file called **acr-nginx.yaml** using the following sample YAML and replace **acr-name** with the name of your ACR.
-The output should show two running pods.
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: nginx0-deployment
+ labels:
+ app: nginx0-deployment
+ spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: nginx0
+ template:
+ metadata:
+ labels:
+ app: nginx0
+ spec:
+ containers:
+ - name: nginx
+ image: <acr-name>.azurecr.io/nginx:v1
+ ports:
+ - containerPort: 80
+ ```
+
+3. Run the deployment in your AKS cluster using the `kubectl apply` command.
+
+ ```console
+ kubectl apply -f acr-nginx.yaml
+ ```
+
+4. Monitor the deployment using the `kubectl get pods` command.
+
+ ```console
+ kubectl get pods
+ ```
+
+ The output should show two running pods, as shown in the following example output:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ nginx0-deployment-669dfc4d4b-x74kr 1/1 Running 0 20s
+ nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
+ ```
-```output
-NAME READY STATUS RESTARTS AGE
-nginx0-deployment-669dfc4d4b-x74kr 1/1 Running 0 20s
-nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
-```
+ ### Troubleshooting
-* Run the [`az aks check-acr`](/cli/azure/aks#az-aks-check-acr) command to validate that the registry is accessible from the AKS cluster.
+* Validate the registry is accessible from the AKS cluster using the [`az aks check-acr`](/cli/azure/aks#az-aks-check-acr) command.
* Learn more about [ACR monitoring](../container-registry/monitor-service.md). * Learn more about [ACR health](../container-registry/container-registry-check-health.md). <!-- LINKS - external -->
-[AKS AKS CLI]: /cli/azure/aks#az_aks_create
[byo-kubelet-identity]: use-managed-identity.md#use-a-pre-created-kubelet-managed-identity [image-pull-secret]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ [summary-msi]: use-managed-identity.md#summary-of-managed-identities
nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
[ps-detach]: /powershell/module/az.aks/set-azakscluster#-acrnametodetach [cli-param]: /cli/azure/aks#az-aks-update-optional-parameters [ps-attach]: /powershell/module/az.aks/set-azakscluster#-acrnametoattach
-[byo-kubelet-identity]: use-managed-identity.md#use-a-pre-created-kubelet-managed-identity
[terraform-reference]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/container_registry
+[az-acr-import]: /cli/azure/acr#az-acr-import
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
+[set-azakscluster]: /powershell/module/az.aks/set-azakscluster
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-acr-create]: /cli/azure/acr#az-acr-create
+[new-azcontainerregistry]: /powershell/module/az.containerregistry/new-azcontainerregistry
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
You can customize CoreDNS with AKS to perform on-the-fly DNS name rewrites.
kubectl get configmaps --namespace=kube-system coredns-custom -o yaml ```
-4. Force CoreDNS to reload the ConfigMap using the [`kubectl delete pod`][kubectl delete] command and the `kube-dns` label. This command deletes the `kube-dns` pods, and then the Kubernetes Scheduler recreates them. The new pods contain the change in TTL value.
+4. To reload the ConfigMap and restart the CoreDNS pods without downtime, perform a rolling restart using [`kubectl rollout restart`][kubectl-rollout].
```console
- kubectl delete pod --namespace kube-system -l k8s-app=kube-dns
+ kubectl -n kube-system rollout restart deployment coredns
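+
+    # Optionally wait for the rolling restart to finish (an extra check, not in the original steps)
+    kubectl -n kube-system rollout status deployment coredns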
``` ## Custom forward server
If you need to specify a forward server for your network traffic, you can create
kubectl apply -f corednsms.yaml ```
-3. Force CoreDNS to reload the ConfigMap using the [`kubectl delete pod`][kubectl delete] so the Kubernetes Scheduler can recreate them.
+3. To reload the ConfigMap and restart the CoreDNS pods without downtime, perform a rolling restart using [`kubectl rollout restart`][kubectl-rollout].
```console
- kubectl delete pod --namespace kube-system -l k8s-app=kube-dns
+ kubectl -n kube-system rollout restart deployment coredns
``` ## Use custom domains
You may want to configure custom domains that can only be resolved internally. F
kubectl apply -f corednsms.yaml ```
-3. Force CoreDNS to reload the ConfigMap using the [`kubectl delete pod`][kubectl delete] so the Kubernetes Scheduler can recreate them.
+3. To reload the ConfigMap and restart the CoreDNS pods without downtime, perform a rolling restart using [`kubectl rollout restart`][kubectl-rollout].
```console
- kubectl delete pod --namespace kube-system -l k8s-app=kube-dns
+ kubectl -n kube-system rollout restart deployment coredns
``` ## Stub domains
CoreDNS can also be used to configure stub domains.
kubectl apply -f corednsms.yaml ```
-3. Force CoreDNS to reload the ConfigMap using the [`kubectl delete pod`][kubectl delete] so the Kubernetes Scheduler can recreate them.
+3. To reload the ConfigMap and restart the CoreDNS pods without downtime, perform a rolling restart using [`kubectl rollout restart`][kubectl-rollout].
```console
- kubectl delete pod --namespace kube-system -l k8s-app=kube-dns
+ kubectl -n kube-system rollout restart deployment coredns
``` ## Hosts plugin
metadata:
kubectl apply -f corednsms.yaml # Force CoreDNS to reload the ConfigMap
- kubectl delete pod --namespace kube-system -l k8s-app=kube-dns
+ kubectl -n kube-system rollout restart deployment coredns
``` 3. View the CoreDNS debug logging using the `kubectl logs` command.
To learn more about core network concepts, see [Network concepts for application
[corednsk8s]: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubectl delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
+[kubectl-rollout]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#rollout
[coredns hosts]: https://coredns.io/plugins/hosts/ [coredns-troubleshooting]: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ [cluster-proportional-autoscaler]: https://github.com/kubernetes-sigs/cluster-proportional-autoscaler
aks Draft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft.md
After you create your artifacts and set up GitHub OIDC, you can use `draft gener
az aks draft up --destination /Workspaces/ContosoAir ```
-## Use Web Application Routing with Draft to make your application accessible over the internet
+## Use Application Routing with Draft to make your application accessible over the internet
-[Web Application Routing][web-app-routing] is the easiest way to get your web application up and running in Kubernetes securely. Web Application Routing removes the complexity of ingress controllers and certificate and DNS management, and it offers configuration for enterprises looking to bring their own. Web Application Routing offers a managed ingress controller based on nginx that you can use without restrictions and integrates out of the box with Open Service Mesh to secure intra-cluster communications.
+[Application Routing][app-routing] is the easiest way to get your web application up and running in Kubernetes securely. Application Routing removes the complexity of ingress controllers and certificate and DNS management, and it offers configuration for enterprises looking to bring their own. Application Routing offers a managed ingress controller based on nginx that you can use without restrictions and integrates out of the box with Open Service Mesh to secure intra-cluster communications.
-- Set up Draft with Web Application Routing using the [`az aks draft update`][az-aks-draft-update] and pass in the DNS name and Azure Key Vault-stored certificate when prompted.
+- Set up Draft with Application Routing using the [`az aks draft update`][az-aks-draft-update] command and pass in the DNS name and Azure Key Vault-stored certificate when prompted.
```azure-cli-interactive az aks draft update
After you create your artifacts and set up GitHub OIDC, you can use `draft gener
<!-- LINKS INTERNAL --> [deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
-[web-app-routing]: web-app-routing.md
+[app-routing]: app-routing.md
[az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update [az-aks-draft-update]: /cli/azure/aks/draft#az-aks-draft-update
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
# HTTP application routing add-on for Azure Kubernetes Service (AKS) > [!CAUTION]
-> The HTTP application routing add-on is in the process of being retired and isn't recommended for production use. We recommend using the [Web Application Routing add-on](./web-app-routing.md) instead.
+> The HTTP application routing add-on is in the process of being retired and isn't recommended for production use. We recommend using the [Application Routing add-on](./app-routing.md) instead.
The HTTP application routing add-on makes it easy to access applications that are deployed to your Azure Kubernetes Service (AKS) cluster by:
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
This article shows you how to deploy the [NGINX ingress controller][nginx-ingres
## Before you begin * This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you're using the latest release of Helm and have access to the *ingress-nginx* Helm repository. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes.
-* This article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+* This article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
* The Kubernetes API health endpoint, `healthz` was deprecated in Kubernetes v1.16. You can replace this endpoint with the `livez` and `readyz` endpoints instead. See [Kubernetes API endpoints for health](https://kubernetes.io/docs/reference/using-api/health-checks/#api-endpoints-for-health) to determine which endpoint to use for your scenario. * If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. * If you're using Azure PowerShell, this article requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
This article included some external components to AKS. To learn more about these
[aks-http-app-routing]: http-application-routing.md [client-source-ip]: concepts-network.md#ingress-controllers [aks-supported versions]: supported-kubernetes-versions.md
-[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
-[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[aks-integrated-acr]: cluster-container-registry-integration.md#create-a-new-acr
[acr-helm]: ../container-registry/container-registry-helm-repos.md [azure-powershell-install]: /powershell/azure/install-az-ps
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
You can also:
[client-source-ip]: concepts-network.md#ingress-controllers [install-azure-cli]: /cli/azure/install-azure-cli [aks-supported versions]: supported-kubernetes-versions.md
-[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
-[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[aks-integrated-acr]: cluster-container-registry-integration.md#create-a-new-acr
[azure-powershell-install]: /powershell/azure/install-az-ps [acr-helm]: ../container-registry/container-registry-helm-repos.md [get-az-aks-cluster]: /powershell/module/az.aks/get-azakscluster
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
AKS uses the following rules for applying updates to installed add-ons:
| ingress-appgw | Use Application Gateway Ingress Controller with your AKS cluster. | [What is Application Gateway Ingress Controller?][agic] | | open-service-mesh | Use Open Service Mesh with your AKS cluster. | [Open Service Mesh AKS add-on][osm] | | azure-keyvault-secrets-provider | Use Azure Keyvault Secrets Provider addon.| [Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster][keyvault-secret-provider] |
-| web_application_routing | Use a managed NGINX ingress controller with your AKS cluster.| [Web Application Routing Overview][web-app-routing] |
+| web_application_routing | Use a managed NGINX ingress controller with your AKS cluster.| [Application Routing Overview][app-routing] |
| keda | Use event-driven autoscaling for the applications on your AKS cluster. | [Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on][keda]| ## Extensions
aks Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-helm.md
For more information about managing Kubernetes application deployments with Helm
<!-- LINKS - internal --> [acr-helm]: ../container-registry/container-registry-helm-repos.md
-[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[aks-integrated-acr]: cluster-container-registry-integration.md#create-a-new-acr
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
To help simplify steps to configure the identities required, the steps below def
metadata: annotations: azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
- labels:
- azure.workload.identity/use: "true"
name: ${SERVICE_ACCOUNT_NAME} namespace: ${SERVICE_ACCOUNT_NAMESPACE} EOF
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
description: Learn how to use a public load balancer with a Standard SKU to expo
Previously updated : 06/17/2023 Last updated : 07/14/2023 #Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an Azure Load Balancer with a Standard SKU.
az aks create \
> > For more information on SNAT, see [Use SNAT for outbound connections](../load-balancer/load-balancer-outbound-connections.md).
-By default, AKS sets *AllocatedOutboundPorts* on its load balancer to `0`, which enables [automatic outbound port assignment based on backend pool size][azure-lb-outbound-preallocatedports] when creating a cluster. For example, if a cluster has 50 or fewer nodes, 1024 ports are allocated to each node. As the number of nodes in the cluster increases, fewer ports are available per node. To show the *AllocatedOutboundPorts* value for the AKS cluster load balancer, use `az network lb outbound-rule list`.
+By default, AKS sets *AllocatedOutboundPorts* on its load balancer to `0`, which enables [automatic outbound port assignment based on backend pool size][azure-lb-outbound-preallocatedports] when creating a cluster. For example, if a cluster has 50 or fewer nodes, 1024 ports are allocated to each node. As the number of nodes in the cluster increases, fewer ports are available per node.
+
+> [!IMPORTANT]
+> When the cluster has fewer than 50 nodes, there's a hard limit of 1024 ports per node, regardless of whether front-end IPs are added.
+
+To show the *AllocatedOutboundPorts* value for the AKS cluster load balancer, use `az network lb outbound-rule list`.
```azurecli-interactive NODE_RG=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv)
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
OSM provides the following capabilities and features:
- Configure weighted traffic controls between two or more services for A/B testing or canary deployments. - Collect and view KPIs from application traffic. - Integrate with external certificate management.-- Integrate with existing ingress solutions such as [NGINX][nginx], [Contour][contour], and [Web Application Routing][web-app-routing].
+- Integrate with existing ingress solutions such as [NGINX][nginx], [Contour][contour], and [Application Routing][app-routing].
-For more information on ingress and OSM, see [Using ingress to manage external access to services within the cluster][osm-ingress] and [Integrate OSM with Contour for ingress][osm-contour]. For an example of how to integrate OSM with ingress controllers using the `networking.k8s.io/v1` API, see [Ingress with Kubernetes Nginx ingress controller][osm-nginx]. For more information on using Web Application Routing, which automatically integrates with OSM, see [Web Application Routing][web-app-routing].
+For more information on ingress and OSM, see [Using ingress to manage external access to services within the cluster][osm-ingress] and [Integrate OSM with Contour for ingress][osm-contour]. For an example of how to integrate OSM with ingress controllers using the `networking.k8s.io/v1` API, see [Ingress with Kubernetes Nginx ingress controller][osm-nginx]. For more information on using Application Routing, which automatically integrates with OSM, see [Application Routing][app-routing].
## Limitations
After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep t
[osm-ingress]: https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/ [osm-contour]: https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_contour [osm-nginx]: https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx
-[web-app-routing]: web-app-routing.md
+[app-routing]: app-routing.md
[istio-about]: istio-about.md
aks Open Service Mesh Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-integrations.md
The Open Service Mesh (OSM) add-on integrates with features provided by Azure an
Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with one of the following solutions:
-* [Web Application Routing][web-app-routing]
+* [Application Routing][app-routing]
* [NGINX ingress][osm-nginx] * [Contour ingress][osm-contour]
This article covered the Open Service Mesh (OSM) add-on integrations with featur
[osm-hashi-vault]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-hashicorp-vault [osm-cert-manager]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-cert-manager [osm-tresor]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-osms-tresor-certificate-issuer
-[web-app-routing]: web-app-routing.md
+[app-routing]: app-routing.md
[about-osm-in-aks]: open-service-mesh-about.md
aks Operator Best Practices Run At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-run-at-scale.md
Title: Best practices for running Azure Kubernetes Service (AKS) at scale
description: Learn the AKS cluster operator best practices and special considerations for running large clusters at 500 node scale and beyond Previously updated : 10/04/2022 Last updated : 07/14/2023
To increase the node limit beyond 1000, you must have the following pre-requisit
* When using internal Kubernetes services behind an internal load balancer, we recommend creating an internal load balancer or internal service below 750 node scale for optimal scaling performance and load balancer elasticity. > [!NOTE]
-> You can't use [Azure Network Policy Manager (Azure NPM)][azure-npm] with clusters that have more than 500 nodes.
+> [Azure Network Policy Manager (Azure NPM)][azure-npm] doesn't support clusters that have more than 250 nodes, and you can't update a cluster that has more than 250 nodes managed by the cluster autoscaler across all agent pools.
## Node pool scaling considerations and best practices
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
All of the following criteria must be met in order for the stop to occur:
* If performed via REST, the upgrade operation uses a preview API version of `2023-01-02-preview` or later. * If performed via Azure CLI, the `aks-preview` CLI extension 0.5.134 or later must be installed. * The last seen usage of deprecated APIs for the targeted version you're upgrading to must occur within 12 hours before the upgrade operation. AKS records usage hourly, so any usage of deprecated APIs within one hour isn't guaranteed to appear in the detection.
+* This detection also covers API usage that only watches deprecated resources. Check the request [verb][k8s-api] to tell the difference.
### Mitigating stopped upgrade operations
After receiving the error message, you have two options to mitigate the issue. Y
:::image type="content" source="./media/upgrade-cluster/applens-api-detection-inline.png" lightbox="./media/upgrade-cluster/applens-api-detection-full.png" alt-text="A screenshot of the Azure portal showing the 'Selected Kubernetes API deprecations' section.":::
-3. Wait 12 hours from the time the last deprecated API usage was seen.
+3. Wait 12 hours from the time the last deprecated API usage was seen. Check the verb in the deprecated API usage to determine whether it's a [watch][k8s-api] operation.
4. Retry your cluster upgrade.
-You can also check past API usage by enabling [Container Insights][container-insights] and exploring kube audit logs.
+You can also check past API usage by enabling [Container Insights][container-insights] and exploring kube audit logs. Check the verb in the deprecated API usage to determine whether it's a [watch][k8s-api] use case.
### Bypass validation to ignore API changes
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[release-tracker]: release-tracker.md [specific-nodepool]: node-image-upgrade.md#upgrade-a-specific-node-pool [k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement
+[k8s-api]: https://kubernetes.io/docs/reference/using-api/api-concepts/
[container-insights]:/azure/azure-monitor/containers/container-insights-log-query#resource-logs [support-policy-user-customizations-agent-nodes]: support-policies.md#user-customization-of-agent-nodes
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
description: Learn how to secure traffic that flows in and out of pods by using Kubernetes network policies in Azure Kubernetes Service (AKS) Previously updated : 01/05/2023 Last updated : 07/14/2023 # Secure traffic between pods using network policies in Azure Kubernetes Service (AKS)
Azure Network Policy Manager for Linux uses Linux *IPTables* and Azure Network P
| Support | Supported by Azure support and Engineering team | Calico community support. For more information on additional paid support, see [Project Calico support options][calico-support]. | | Logging | Logs available with **kubectl log -n kube-system \<network-policy-pod\>** command | For more information, see [Calico component logs][calico-logs] |
-## Limitations:
+## Limitations
Azure Network Policy Manager doesn't support IPv6. Otherwise, Azure Network Policy Manager fully supports the network policy spec in Linux.

* In Windows, Azure Network Policy Manager doesn't support the following:
  * named ports
  * SCTP protocol
Azure Network Policy Manager doesn't support IPv6. Otherwise, Azure Network Poli
* "except" CIDR blocks (a CIDR with exceptions) >[!NOTE]
-> * Azure Network Policy Manager pod logs will record an error if an unsupported policy is created.
+> * Azure Network Policy Manager pod logs record an error if an unsupported policy is created.
-## Scale:
+## Scale
-With the current limits set on Azure Network Policy Manager for Linux, it can scale up to 500 Nodes and 40k Pods. You may see OOM kills beyond this scale. Please reach out to us on [aks-acn-github] if you'd like to increase your memory limit.
+With Azure Network Policy Manager for Linux, we don't recommend scaling beyond 250 nodes and 20k pods. If you attempt to scale beyond these limits, you may encounter Out of Memory (OOM) kills. To increase your memory limit, contact us on [aks-acn-github].
## Create an AKS cluster and enable Network Policy
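For reference, a minimal sketch of creating a cluster with Azure Network Policy Manager enabled, assuming resource group `myResourceGroup`:

```azurecli-interactive
# Create a cluster with Azure CNI networking and Azure network policy enforcement.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3 \
    --network-plugin azure \
    --network-policy azure \
    --generate-ssh-keys
```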
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md
Title: Use Key Vault references description: Learn how to set up Azure App Service and Azure Functions to use Azure Key Vault references. Make Key Vault secrets available to your application code.-+ Previously updated : 06/11/2021 Last updated : 07/31/2023
-# Use Key Vault references for App Service and Azure Functions
+# Use Key Vault references as app settings in Azure App Service and Azure Functions
-This topic shows you how to work with secrets from Azure Key Vault in your App Service or Azure Functions application without requiring any code changes. [Azure Key Vault](../key-vault/general/overview.md) is a service that provides centralized secrets management, with full control over access policies and audit history.
+This article shows you how to use secrets from Azure Key Vault as values of [app settings](configure-common.md#configure-app-settings) or [connection strings](configure-common.md#configure-connection-strings) in your App Service or Azure Functions apps.
-## Granting your app access to Key Vault
+[Azure Key Vault](../key-vault/general/overview.md) is a service that provides centralized secrets management, with full control over access policies and audit history. When an app setting or connection string is a key vault reference, your application code can use it like any other app setting or connection string. This way, you can maintain secrets apart from your app's configuration. App settings are securely encrypted at rest, but if you need secret management capabilities, they should go into a key vault.
-In order to read secrets from Key Vault, you need to have a vault created and give your app permission to access it.
+## Grant your app access to a key vault
+
+In order to read secrets from a key vault, you need to have a vault created and give your app permission to access it.
1. Create a key vault by following the [Key Vault quickstart](../key-vault/secrets/quick-create-cli.md). 1. Create a [managed identity](overview-managed-identity.md) for your application.
- Key Vault references will use the app's system assigned identity by default, but you can [specify a user-assigned identity](#access-vaults-with-a-user-assigned-identity).
+ Key vault references use the app's system-assigned identity by default, but you can [specify a user-assigned identity](#access-vaults-with-a-user-assigned-identity).
+
+1. Authorize [read access to secrets in your key vault](../key-vault/general/security-features.md#privileged-access) for the managed identity you created earlier. How you do it depends on the permissions model of your key vault, as shown in the sketch after this list:
-1. Create an [access policy in Key Vault](../key-vault/general/security-features.md#privileged-access) for the application identity you created earlier. Enable the "Get" secret permission on this policy. Do not configure the "authorized application" or `applicationId` settings, as this is not compatible with a managed identity.
+ - **Azure role-based access control**: Assign the **Key Vault Secrets User** role to the managed identity. For instructions, see [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../key-vault/general/rbac-guide.md).
+ - **Vault access policy**: Assign the **Get** secrets permission to the managed identity. For instructions, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy.md).
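A minimal CLI sketch for both models, assuming placeholders `<vault-name>`, `<group-name>`, and `<principal-id>` (the object ID of the app's managed identity):

```azurecli
# Azure role-based access control model: assign the Key Vault Secrets User role.
az role assignment create \
    --role "Key Vault Secrets User" \
    --assignee-object-id <principal-id> \
    --assignee-principal-type ServicePrincipal \
    --scope $(az keyvault show --name <vault-name> --resource-group <group-name> --query id -o tsv)

# Vault access policy model: grant the Get secrets permission.
az keyvault set-policy \
    --name <vault-name> \
    --object-id <principal-id> \
    --secret-permissions get
```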
### Access network-restricted vaults
-If your vault is configured with [network restrictions](../key-vault/general/overview-vnet-service-endpoints.md), you will also need to ensure that the application has network access. Vaults shouldn't depend on the app's public outbound IPs because the origin IP of the secret request could be different. Instead, the vault should be configured to accept traffic from a virtual network used by the app.
+If your vault is configured with [network restrictions](../key-vault/general/overview-vnet-service-endpoints.md), ensure that the application has network access. Vaults shouldn't depend on the app's public outbound IPs because the origin IP of the secret request could be different. Instead, the vault should be configured to accept traffic from a virtual network used by the app.
1. Make sure the application has outbound networking capabilities configured, as described in [App Service networking features](./networking-features.md) and [Azure Functions networking options](../azure-functions/functions-networking-options.md).
- Linux applications attempting to use private endpoints additionally require that the app be explicitly configured to have all traffic route through the virtual network. This requirement will be removed in a forthcoming update. To set this, use the following Azure CLI or Azure PowerShell command:
+ Linux applications that connect to private endpoints must be explicitly configured to route all traffic through the virtual network. This requirement will be removed in a forthcoming update. To configure this setting, run the following command:
# [Azure CLI](#tab/azure-cli) ```azurecli
- az webapp config set --subscription <sub> -g MyResourceGroupName -n MyAppName --generic-configurations '{"vnetRouteAllEnabled": true}'
+ az webapp config set --subscription <sub> -g <group-name> -n <app-name> --generic-configurations '{"vnetRouteAllEnabled": true}'
``` # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
- Update-AzFunctionAppSetting -Name MyAppName -ResourceGroupName MyResourceGroupName -AppSetting @{vnetRouteAllEnabled = $true}
+ Update-AzFunctionAppSetting -Name <app-name> -ResourceGroupName <group-name> -AppSetting @{vnetRouteAllEnabled = $true}
```
-2. Make sure that the vault's configuration accounts for the network or subnet through which your app will access it.
+2. Make sure that the vault's configuration allows the network or subnet that your app uses to access it.
> [!NOTE]
-> Windows container currently does not support Key Vault references over VNet Integration.
+> Windows container currently does not support key vault references over VNet Integration.
### Access vaults with a user-assigned identity
-Some apps need to reference secrets at creation time, when a system-assigned identity would not yet be available. In these cases, a user-assigned identity can be created and given access to the vault in advance.
+Some apps need to reference secrets at creation time, when a system-assigned identity isn't available yet. In these cases, a user-assigned identity can be created and given access to the vault in advance.
Once you have granted permissions to the user-assigned identity, follow these steps: 1. [Assign the identity](./overview-managed-identity.md#add-a-user-assigned-identity) to your application if you haven't already.
-1. Configure the app to use this identity for Key Vault reference operations by setting the `keyVaultReferenceIdentity` property to the resource ID of the user-assigned identity.
+1. Configure the app to use this identity for key vault reference operations by setting the `keyVaultReferenceIdentity` property to the resource ID of the user-assigned identity.
# [Azure CLI](#tab/azure-cli) ```azurecli-interactive
- userAssignedIdentityResourceId=$(az identity show -g MyResourceGroupName -n MyUserAssignedIdentityName --query id -o tsv)
- appResourceId=$(az webapp show -g MyResourceGroupName -n MyAppName --query id -o tsv)
- az rest --method PATCH --uri "${appResourceId}?api-version=2021-01-01" --body "{'properties':{'keyVaultReferenceIdentity':'${userAssignedIdentityResourceId}'}}"
+ identityResourceId=$(az identity show --resource-group <group-name> --name <identity-name> --query id -o tsv)
+ az webapp update --resource-group <group-name> --name <app-name> --set keyVaultReferenceIdentity=${identityResourceId}
``` # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
- $userAssignedIdentityResourceId = Get-AzUserAssignedIdentity -ResourceGroupName MyResourceGroupName -Name MyUserAssignedIdentityName | Select-Object -ExpandProperty Id
- $appResourceId = Get-AzFunctionApp -ResourceGroupName MyResourceGroupName -Name MyAppName | Select-Object -ExpandProperty Id
+ $identityResourceId = Get-AzUserAssignedIdentity -ResourceGroupName <group-name> -Name <identity-name> | Select-Object -ExpandProperty Id
+ $appResourceId = Get-AzFunctionApp -ResourceGroupName <group-name> -Name <app-name> | Select-Object -ExpandProperty Id
$Path = "{0}?api-version=2021-01-01" -f $appResourceId
- Invoke-AzRestMethod -Method PATCH -Path $Path -Payload "{'properties':{'keyVaultReferenceIdentity':'$userAssignedIdentityResourceId'}}"
+ Invoke-AzRestMethod -Method PATCH -Path $Path -Payload "{'properties':{'keyVaultReferenceIdentity':'$identityResourceId'}}"
```
-This configuration will apply to all references for the app.
+This setting applies to all key vault references for the app.
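To confirm which identity the app uses for reference resolution, you can read the property back. A sketch, assuming the Azure CLI:

```azurecli
# Read back the identity configured for key vault reference resolution.
az webapp show --resource-group <group-name> --name <app-name> --query keyVaultReferenceIdentity -o tsv
```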
+
+## Rotation
+
+If the secret version isn't specified in the reference, the app uses the latest version that exists in the key vault. When newer versions become available, such as with a rotation event, the app automatically updates and begins using the latest version within 24 hours. The delay is because App Service caches the values of the key vault references and refetches them every 24 hours. Any configuration change to the app causes an app restart and an immediate refetch of all referenced secrets.
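If you need a refetch sooner than the 24-hour cache window, any configuration change works as a trigger. A sketch using a throwaway marker setting (the setting name `SECRET_REFRESH_MARKER` is an arbitrary assumption):

```azurecli
# Updating any app setting restarts the app and forces an immediate refetch
# of all referenced secrets. The marker setting name is arbitrary.
az webapp config appsettings set \
    --resource-group <group-name> \
    --name <app-name> \
    --settings SECRET_REFRESH_MARKER=$(date +%s)
```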
-## Reference syntax
+## Source app settings from key vault
-A Key Vault reference is of the form `@Microsoft.KeyVault({referenceString})`, where `{referenceString}` is replaced by one of the following options:
+To use a key vault reference, set the reference as the value of the setting. Your app can reference the secret through its key as normal. No code changes are required.
+
+> [!TIP]
+> Most app settings using key vault references should be marked as slot settings, as you should have separate vaults for each environment.
+
+A key vault reference is of the form `@Microsoft.KeyVault({referenceString})`, where `{referenceString}` is in one of the following formats:
> [!div class="mx-tdBreakAll"] > | Reference string | Description | > |--||
-> | SecretUri=_secretUri_ | The **SecretUri** should be the full data-plane URI of a secret in Key Vault, optionally including a version, e.g., `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931` |
-> | VaultName=_vaultName_;SecretName=_secretName_;SecretVersion=_secretVersion_ | The **VaultName** is required and should the name of your Key Vault resource. The **SecretName** is required and should be the name of the target secret. The **SecretVersion** is optional but if present indicates the version of the secret to use. |
+> | SecretUri=_secretUri_ | The **SecretUri** should be the full data-plane URI of a secret in the vault, optionally including a version, e.g., `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931` |
+> | VaultName=_vaultName_;SecretName=_secretName_;SecretVersion=_secretVersion_ | The **VaultName** is required and is the vault name. The **SecretName** is required and is the secret name. The **SecretVersion** is optional but if present indicates the version of the secret to use. |
-For example, a complete reference would look like the following:
+For example, a complete reference would look like the following string:
```
@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
```

Alternatively:

```
@Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret)
```
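For example, a sketch of wiring such a reference into an app setting with the Azure CLI (the setting name `MySecret` is an assumption):

```azurecli
az webapp config appsettings set \
    --resource-group <group-name> \
    --name <app-name> \
    --settings MySecret="@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)"
```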
-## Rotation
-
-If a version is not specified in the reference, then the app will use the latest version that exists in the key vault. When newer versions become available, such as with a rotation event, the app will automatically update and begin using the latest version within 24 hours. The delay is because App Service caches the values of the key vault references and refetches it every 24 hours. Any configuration changes to the app that results in a site restart causes an immediate refetch of all referenced secrets.
-
-## Source Application Settings from Key Vault
-
-Key Vault references can be used as values for [Application Settings](configure-common.md#configure-app-settings), allowing you to keep secrets in Key Vault instead of the site config. Application Settings are securely encrypted at rest, but if you need secret management capabilities, they should go into Key Vault.
-
-To use a Key Vault reference for an [app setting](configure-common.md#configure-app-settings), set the reference as the value of the setting. Your app can reference the secret through its key as normal. No code changes are required.
-
-> [!TIP]
-> Most application settings using Key Vault references should be marked as slot settings, as you should have separate vaults for each environment.
- ### Considerations for Azure Files mounting
-Apps can use the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting to mount [Azure Files](../storage/files/storage-files-introduction.md) as the file system. This setting has additional validation checks to ensure that the app can be properly started. The platform relies on having a content share within Azure Files, and it assumes a default name unless one is specified via the `WEBSITE_CONTENTSHARE` setting. For any requests which modify these settings, the platform will attempt to validate if this content share exists, and it will attempt to create it if not. If it cannot locate or create the content share, the request is blocked.
+Apps can use the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting to mount [Azure Files](../storage/files/storage-files-introduction.md) as the file system. This setting has validation checks to ensure that the app can be properly started. The platform relies on having a content share within Azure Files, and it assumes a default name unless one is specified via the `WEBSITE_CONTENTSHARE` setting. For any requests that modify these settings, the platform validates if this content share exists, and attempts to create it if not. If it can't locate or create the content share, it blocks the request.
-When using Key Vault references for this setting, this validation check will fail by default, as the secret itself cannot be resolved while processing the incoming request. To avoid this issue, you can skip the validation by setting `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` to "1". This will bypass all checks, and the content share will not be created for you. You should ensure it is created in advance.
+When you use key vault references in this setting, the validation check fails by default, because the secret itself can't be resolved while processing the incoming request. To avoid this issue, you can skip the validation by setting `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` to "1". This setting tells App Service to bypass all checks, and doesn't create the content share for you. You should ensure that it's created in advance.
> [!CAUTION]
> If you skip validation and either the connection string or content share is invalid, the app will be unable to start properly and will only serve HTTP 500 errors.
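A sketch of setting the skip flag on a function app (placeholder values are assumptions):

```azurecli
# Skip content share validation; the share must then be created in advance.
az functionapp config appsettings set \
    --resource-group <group-name> \
    --name <app-name> \
    --settings WEBSITE_SKIP_CONTENTSHARE_VALIDATION=1
```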
-As part of creating the site, it is also possible that attempted mounting of the content share could fail due to managed identity permissions not being propagated or the virtual network integration not being set up. You can defer setting up Azure Files until later in the deployment template to accommodate this. See [Azure Resource Manager deployment](#azure-resource-manager-deployment) to learn more. App Service will use a default file system until Azure Files is set up, and files are not copied over, so you will need to ensure that no deployment attempts occur during the interim period before Azure Files is mounted.
+As part of creating the app, attempted mounting of the content share could fail due to managed identity permissions not being propagated or the virtual network integration not being set up. You can defer setting up Azure Files until later in the deployment template to accommodate this. See [Azure Resource Manager deployment](#azure-resource-manager-deployment) to learn more. In this case, App Service uses a default file system until Azure Files is set up, and files aren't copied over. You must ensure that no deployment attempts occur during the interim period before Azure Files is mounted.
### Considerations for Application Insights instrumentation
-Apps can use the `APPINSIGHTS_INSTRUMENTATIONKEY` or `APPLICATIONINSIGHTS_CONNECTION_STRING` application settings to integrate with [Application Insights](../azure-monitor/app/app-insights-overview.md). The portal experiences for App Service and Azure Functions also use these settings to surface telemetry data from the resource. If these values are referenced from Key Vault, these experiences are not available, and you instead need to work directly with the Application Insights resource to view the telemetry. However, these values are [not considered secrets](../azure-monitor/app/sdk-connection-string.md#is-the-connection-string-a-secret), so you might alternatively consider configuring them directly instead of using the Key Vault references feature.
+Apps can use the `APPINSIGHTS_INSTRUMENTATIONKEY` or `APPLICATIONINSIGHTS_CONNECTION_STRING` application settings to integrate with [Application Insights](../azure-monitor/app/app-insights-overview.md). The portal experiences for App Service and Azure Functions also use these settings to surface telemetry data from the resource. If these values are referenced from Key Vault, these experiences aren't available, and you instead need to work directly with the Application Insights resource to view the telemetry. However, these values are [not considered secrets](../azure-monitor/app/sdk-connection-string.md#is-the-connection-string-a-secret), so you might alternatively consider configuring them directly instead of using key vault references.
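For example, a sketch of configuring the connection string directly instead of through a key vault reference (placeholder values are assumptions):

```azurecli
# These values aren't considered secrets, so they can be set directly.
az webapp config appsettings set \
    --resource-group <group-name> \
    --name <app-name> \
    --settings APPLICATIONINSIGHTS_CONNECTION_STRING="<connection-string>"
```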
### Azure Resource Manager deployment
-When automating resource deployments through Azure Resource Manager templates, you may need to sequence your dependencies in a particular order to make this feature work. Of note, you will need to define your application settings as their own resource, rather than using a `siteConfig` property in the site definition. This is because the site needs to be defined first so that the system-assigned identity is created with it and can be used in the access policy.
+When automating resource deployments through Azure Resource Manager templates, you may need to sequence your dependencies in a particular order to make this feature work. Be sure to define your app settings as their own resource, rather than using a `siteConfig` property in the app definition. This is because the app needs to be defined first so that the system-assigned identity is created with it and can be used in the access policy.
-An example pseudo-template for a function app might look like the following:
+The following pseudo-template is an example of what a function app might look like:
```json {
An example pseudo-template for a function app might look like the following:
> [!NOTE] > In this example, the source control deployment depends on the application settings. This is normally unsafe behavior, as the app setting update behaves asynchronously. However, because we have included the `WEBSITE_ENABLE_SYNC_UPDATE_SITE` application setting, the update is synchronous. This means that the source control deployment will only begin once the application settings have been fully updated. For more app settings, see [Environment variables and app settings in Azure App Service](reference-app-settings.md).
-## Troubleshooting Key Vault References
+## Troubleshooting key vault references
-If a reference is not resolved properly, the reference value will be used instead. This means that for application settings, an environment variable would be created whose value has the `@Microsoft.KeyVault(...)` syntax. This may cause the application to throw errors, as it was expecting a secret of a certain structure.
+If a reference isn't resolved properly, the reference string is used instead (for example, `@Microsoft.KeyVault(...)`). This may cause the application to throw errors, because it expects a secret value rather than the literal reference string.
-Most commonly, this is due to a misconfiguration of the [Key Vault access policy](#granting-your-app-access-to-key-vault). However, it could also be due to a secret no longer existing or a syntax error in the reference itself.
+Failure to resolve is commonly due to a misconfiguration of the [Key Vault access policy](#grant-your-app-access-to-a-key-vault). However, it could also be due to a secret no longer existing or a syntax error in the reference itself.
-If the syntax is correct, you can view other causes for error by checking the current resolution status in the portal. Navigate to Application Settings and select "Edit" for the reference in question. Below the setting configuration, you should see status information, including any errors. The absence of these implies that the reference syntax is invalid.
+If the syntax is correct, you can investigate other causes of the error by checking the current resolution status in the portal. Navigate to Application Settings and select "Edit" for the reference in question. The edit dialog shows status information, including any errors. If you don't see the status message, the syntax is invalid and not recognized as a key vault reference.
You can also use one of the built-in detectors to get additional information.
You can also use one of the built-in detectors to get additional information.
1. In the portal, navigate to your app. 2. Select **Diagnose and solve problems**.
-3. Choose **Availability and Performance** and select **Web app down.**
-4. Find **Key Vault Application Settings Diagnostics** and click **More info**.
-
+3. Choose **Availability and Performance** and select **Web app down**.
+4. In the search box, search for and select **Key Vault Application Settings Diagnostics**.
### Using the detector for Azure Functions 1. In the portal, navigate to your app.
-2. Navigate to **Platform features.**
+2. Navigate to **Platform features**.
3. Select **Diagnose and solve problems**.
-4. Choose **Availability and Performance** and select **Function app down or reporting errors.**
-5. Click on **Key Vault Application Settings Diagnostics.**
+4. Choose **Availability and Performance** and select **Function app down or reporting errors**.
+5. Select **Key Vault Application Settings Diagnostics**.
app-service Configure Encrypt At Rest Using Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-encrypt-at-rest-using-cmk.md
Now you can replace the value of the `WEBSITE_RUN_FROM_PACKAGE` application sett
az keyvault create --name "Contoso-Vault" --resource-group <group-name> --location eastus ```
-1. Follow [these instructions to grant your app access](app-service-key-vault-references.md#granting-your-app-access-to-key-vault) to your key vault:
+1. Follow [these instructions to grant your app access](app-service-key-vault-references.md#grant-your-app-access-to-a-key-vault) to your key vault:
1. Use the following [`az keyvault secret set`](/cli/azure/keyvault/secret#az-keyvault-secret-set) command to add your external URL as a secret in your key vault:
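    A sketch of that command, assuming a secret named `external-url` (the name is arbitrary):

    ```azurecli
    az keyvault secret set --vault-name "Contoso-Vault" --name "external-url" --value "<sas-url>"
    ```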
Updating this application setting causes your web app to restart. After the app
## How to rotate the access token
-It is best practice to periodically rotate the SAS key of your storage account. To ensure the web app does not inadvertently loose access, you must also update the SAS URL in Key Vault.
+It is best practice to periodically rotate the SAS key of your storage account. To ensure the web app does not inadvertently lose access, you must also update the SAS URL in Key Vault.
-1. Rotate the SAS key by navigating to your storage account in the Azure portal. Under **Settings** > **Access keys**, click the icon to rotate the SAS key.
+1. Rotate the SAS key by navigating to your storage account in the Azure portal. Under **Settings** > **Access keys**, select the icon to rotate the SAS key.
1. Copy the new SAS URL, and use the following command to set the updated SAS URL in your key vault:
You can revoke the web app's access to the site data by disabling the web app's
Your application files are now encrypted at rest in your storage account. When your web app starts, it retrieves the SAS URL from your key vault. Finally, the web app loads the application files from the storage account.
-If you need to revoke the web app's access to your storage account, you can either revoke access to the key vault or rotate the storage account keys, which invalidates the SAS URL.
+If you need to revoke the web app's access to your storage account, you can either revoke access to the key vault or rotate the storage account keys, both of which invalidate the SAS URL.
## Frequently Asked Questions
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
Follow the instructions in the [Secure a custom DNS name with an TLS/SSL binding
[Azure KeyVault](../key-vault/general/overview.md) provides centralized secret management with access policies and audit history. You can store secrets (such as passwords or connection strings) in KeyVault and access these secrets in your application through environment variables.
-First, follow the instructions for [granting your app access to Key Vault](app-service-key-vault-references.md#granting-your-app-access-to-key-vault) and [making a KeyVault reference to your secret in an Application Setting](app-service-key-vault-references.md#reference-syntax). You can validate that the reference resolves to the secret by printing the environment variable while remotely accessing the App Service terminal.
+First, follow the instructions for [granting your app access to a key vault](app-service-key-vault-references.md#grant-your-app-access-to-a-key-vault) and [making a KeyVault reference to your secret in an Application Setting](app-service-key-vault-references.md#source-app-settings-from-key-vault). You can validate that the reference resolves to the secret by printing the environment variable while remotely accessing the App Service terminal.
To inject these secrets in your Spring or Tomcat configuration file, use environment variable injection syntax (`${MY_ENV_VAR}`). For Spring configuration files, see this documentation on [externalized configurations](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html).
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on custom containers, see [Run a custom container in Azure]
|-|-|-| | `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | Set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `true` for custom containers. || | `WEBSITES_CONTAINER_START_TIME_LIMIT` | Amount of time in seconds to wait for the container to complete start-up before restarting the container. Default is `230`. You can increase it up to the maximum of `1800`. ||
+| `WEBSITES_CONTAINER_STOP_TIME_LIMIT` | Amount of time in seconds to wait for the container to terminate gracefully. Default is `5`. You can increase it to a maximum of `120`. ||
| `DOCKER_REGISTRY_SERVER_URL` | URL of the registry server, when running a custom container in App Service. For security, this variable isn't passed on to the container. | `https://<server-name>.azurecr.io` | | `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable isn't passed on to the container. || | `DOCKER_REGISTRY_SERVER_PASSWORD` | Password to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable isn't passed on to the container. ||
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
The IP address prefix is the subnet's IP address range for the virtual network a
Consult your network engineer to obtain the IP address prefix in CIDR notation. An IP Subnet CIDR calculator may be used to obtain this value.
-## Static configuration
+## Static IP configuration
-Static IP configuration is recommended for Arc resource bridge, because the resource bridge needs three static IPs in the same subnet for the control plane, appliance VM, and reserved appliance VM.
+If you deploy Arc resource bridge to a production environment, you must use static IP configuration. Static IP configuration assigns three static IPs (in the same subnet) to the Arc resource bridge control plane, appliance VM, and reserved appliance VM.
-If using DHCP, reserve those IP addresses, ensuring the IPs are outside of the assignable DHCP range of IPs (i.e. the control plane IP should be treated as a reserved/static IP that no other machine on the network will use or receive from DHCP). DHCP is generally not recommended because a change in IP address (ex: due to an outage) impacts the resource bridge availability.
+DHCP is supported only in test environments, for testing VM management on Azure Stack HCI. DHCP shouldn't be used in a production environment, and it isn't supported on any other Arc-enabled private cloud, including Arc-enabled VMware, Arc for AVS, or Arc-enabled SCVMM. If you use DHCP, you must reserve the IP addresses used by the control plane and the appliance VM, and these IPs must be outside of the assignable DHCP range. For example, the control plane IP should be treated as a reserved/static IP that no other machine on the network uses or receives from DHCP. If the control plane IP or appliance VM IP changes (for example, due to an outage), the resource bridge availability and functionality are impacted.
## Management machine requirements
azure-functions Configure Encrypt At Rest Using Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-encrypt-at-rest-using-cmk.md
Now you can replace the value of the `WEBSITE_RUN_FROM_PACKAGE` application sett
az keyvault create --name "Contoso-Vault" --resource-group <group-name> --location eastus ```
-1. Follow [these instructions to grant your app access](../app-service/app-service-key-vault-references.md#granting-your-app-access-to-key-vault) to your key vault:
+1. Follow [these instructions to grant your app access](../app-service/app-service-key-vault-references.md#grant-your-app-access-to-a-key-vault) to your key vault:
1. Use the following [`az keyvault secret set`](/cli/azure/keyvault/secret#az-keyvault-secret-set) command to add your external URL as a secret in your key vault:
Updating this application setting causes your function app to restart. After the
## How to rotate the access token
-It is best practice to periodically rotate the SAS key of your storage account. To ensure the function app does not inadvertently loose access, you must also update the SAS URL in Key Vault.
+It is best practice to periodically rotate the SAS key of your storage account. To ensure the function app does not inadvertently lose access, you must also update the SAS URL in Key Vault.
-1. Rotate the SAS key by navigating to your storage account in the Azure portal. Under **Settings** > **Access keys**, click the icon to rotate the SAS key.
+1. Rotate the SAS key by navigating to your storage account in the Azure portal. Under **Settings** > **Access keys**, select the icon to rotate the SAS key.
1. Copy the new SAS URL, and use the following command to set the updated SAS URL in your key vault:
You can revoke the function app's access to the site data by disabling the funct
Your application files are now encrypted at rest in your storage account. When your function app starts, it retrieves the SAS URL from your key vault. Finally, the function app loads the application files from the storage account.
-If you need to revoke the function app's access to your storage account, you can either revoke access to the key vault or rotate the storage account keys, which invalidates the SAS URL.
+If you need to revoke the function app's access to your storage account, you can either revoke access to the key vault or rotate the storage account keys, both of which invalidate the SAS URL.
## Frequently Asked Questions
azure-functions Azfd0002 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0002.md
For more information, see [AzureWebJobsStorage](../../functions-app-settings.md#
Update the value of the `AzureWebJobsStorage` app setting on your function app with a valid storage account connection string. ## When to suppress the event
-You should suppress this event when your function app uses an Azure Key Vault reference in the `AzureWebjobsStorage` app setting instead of a connection string. For more information, see [Source application settings from Key Vault](../../../app-service/app-service-key-vault-references.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json#source-application-settings-from-key-vault)
+You should suppress this event when your function app uses an Azure Key Vault reference in the `AzureWebjobsStorage` app setting instead of a connection string. For more information, see [Source application settings from Key Vault](../../../app-service/app-service-key-vault-references.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json#source-app-settings-from-key-vault)
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
An HTTP trigger function endpoint is created to support the schedule and sequenc
|VirtualMachineRequestExecutor |Queue |This function performs the actual start and stop operation on the VM.| |CreateAutoStopAlertExecutor |Queue |This function gets the payload information from the **AutoStop** function to create the alert on the VM.| |HeartBeatAvailabilityTest |Timer |This function monitors the availability of the primary HTTP functions.|
-|CostAnalyticsFunction |Timer |This function calculates the cost to run the Start/Stop V2 solution on a monthly basis.|
-|SavingsAnalyticsFunction |Timer |This function calculates the total savings achieved by the Start/Stop V2 solution on a monthly basis.|
+|CostAnalyticsFunction |Timer |This function is used by Microsoft to estimate aggregate cost of Start/Stop V2 across customers. This function does not impact the functionality of Start/Stop V2.|
+|SavingsAnalyticsFunction |Timer |This function is used by Microsoft to estimate aggregate savings of Start/Stop V2 across customers. This function does not impact the functionality of Start/Stop V2.|
|VirtualMachineSavingsFunction |Queue |This function performs the actual savings calculation on a VM achieved by the Start/Stop V2 solution.| |TriggerAutoUpdate |Timer |This function starts the auto update process based on the application setting "**EnableAutoUpdate=true**".| |UpdateStartStopV2 |Queue |This function performs the actual auto update execution, which validates your current version with the available version and decides the final action.|
azure-maps How To Use Npm Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-npm-package.md
+
+ Title: How to use the Azure Maps map control npm package
+
+description: Learn how to add maps to node.js applications by using the map control npm package in Azure Maps.
++ Last updated : 07/04/2023++++++
+# Use the azure-maps-control npm package
+
+The [azure-maps-control] npm package is a client-side library that allows you to embed the Azure Maps map control into your node.js applications using JavaScript or TypeScript. This library makes it easy to use the Azure Maps REST services and lets you customize interactive maps with your content and imagery.
+
+## Prerequisites
+
+To use the npm package in an application, you must have the following prerequisites:
+
+* An [Azure Maps account]
+* A [subscription key] or Azure Active Directory (Azure AD) credentials. For more information, see [authentication options].
+
+## Installation
+
+Install the latest [azure-maps-control] package.
+
+```powershell
+npm install azure-maps-control
+```
+
+This package includes a minified version of the source code, CSS Style Sheet, and the TypeScript definitions for the Azure Maps map control.
+
+You also need to embed the CSS style sheet for various controls to display correctly. If you're using a JavaScript bundler to bundle the dependencies and package your code, refer to your bundler's documentation on how it's done. For [Webpack], it's commonly done via a combination of `style-loader` and `css-loader`, with documentation available at [style-loader].
+
+To begin, install `style-loader` and `css-loader`:
+
+```powershell
+npm install --save-dev style-loader css-loader
+```
+
+Inside your source file, import _atlas.min.css_:
+
+```js
+import "azure-maps-control/dist/atlas.min.css";
+```
+
+Then add loaders to the module rules portion of the Webpack config:
+
+```js
+module.exports = {
+ module: {
+ rules: [
+ {
+ test: /\.css$/i,
+ use: ["style-loader", "css-loader"]
+ }
+ ]
+ }
+};
+```
+
+Refer to the following section for a complete example.
+
+## Create a map in a node.js application
+
+Embed a map in a web page using the map control npm package.
+
+1. Create a new project
+
+ ```powershell
+ npm init
+ ```
+
+ `npm init` is a command that helps you create a _package.json_ file for your node project. It asks you some questions and generates the file based on your answers. You can also use `-y` or `--yes` to skip the questions and use the default values. The _package.json_ file contains information about your project, such as its name, version, dependencies, scripts, etc.
+
+2. Install the latest [azure-maps-control] package.
+
+ ```powershell
+ npm install azure-maps-control
+ ```
+
+3. Install Webpack and other dev dependencies.
+
+ ```powershell
+ npm install --save-dev webpack webpack-cli style-loader css-loader
+ ```
+
+4. Update _package.json_ by adding a new script for `"build": "webpack"`. The file should now look something like the following:
+
+ ```js
+ {
+ "name": "azure-maps-npm-demo",
+ "version": "1.0.0",
+ "description": "",
+ "main": "index.js",
+ "scripts": {
+ "test": "echo \"Error: no test specified\" && exit 1",
+ "build": "webpack"
+ },
+ "author": "",
+ "license": "ISC",
+ "dependencies": {
+ "azure-maps-control": "^2.3.1"
+ },
+ "devDependencies": {
+ "css-loader": "^6.8.1",
+ "style-loader": "^3.3.3",
+ "webpack": "^5.88.1",
+ "webpack-cli": "^5.1.4"
+ }
+ }
+ ```
+
+5. Create a Webpack config file named _webpack.config.js_ in the project's root folder. Include these settings in the config file.
+
+ ```js
+ module.exports = {
+ entry: "./src/js/main.js",
+ mode: "development",
+ output: {
+ path: `${__dirname}/dist`,
+ filename: "bundle.js"
+ },
+ module: {
+ rules: [
+ {
+ test: /\.css$/i,
+ use: ["style-loader", "css-loader"]
+ }
+ ]
+ }
+ };
+ ```
+
+6. Add a new JavaScript file at _./src/js/main.js_ with this code.
+
+ ```js
+ import * as atlas from "azure-maps-control";
+ import "azure-maps-control/dist/atlas.min.css";
+
+ const onload = () => {
+ // Initialize a map instance.
+ const map = new atlas.Map("map", {
+ view: "Auto",
+ // Add authentication details for connecting to Azure Maps.
+ authOptions: {
+ authType: "subscriptionKey",
+ subscriptionKey: "<Your Azure Maps Key>"
+ }
+ });
+ };
+
+ document.body.onload = onload;
+ ```
+
+7. Add a new HTML file named _index.html_ in the project's root folder with this content:
+
+ ```html
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+ <meta charset="utf-8" />
+ <title>Azure Maps demo</title>
+ <script src="./dist/bundle.js" async></script>
+ <style>
+ html,
+ body,
+ #map {
+ width: 100%;
+ height: 100%;
+ padding: 0;
+ margin: 0;
+ }
+ </style>
+ </head>
+ <body>
+ <div id="map"></div>
+ </body>
+ </html>
+ ```
+
+ Your project should now have the following files:
+
+ ```
+    ├───node_modules
+    ├───index.html
+    ├───package-lock.json
+    ├───package.json
+    ├───webpack.config.js
+    └───src
+        └───js
+            └───main.js
+    ```
+
+8. Run the following command to generate a JavaScript file at _./dist/bundle.js_
+
+ ```powershell
+ npm run build
+ ```
+
+9. Open the file _index.html_ in your web browser and view the rendered map. It should look like the following image:
+
+ :::image type="content" source="./media/how-to-use-npm-package/map-of-the-world.png" alt-text="A screenshot showing a map of the world.":::
+
+## Use other Azure Maps npm packages
+
+Azure Maps offers other modules as npm packages that can be integrated into your application. These modules include:
+- [azure-maps-drawing-tools]
+- [azure-maps-indoor]
+- [azure-maps-spatial-io]
+
+The following sample shows how to import a module and use it in your application. This sample uses [azure-maps-spatial-io] to read a `POINT(-122.34009 47.60995)` string as GeoJSON and renders it on the map using a bubble layer.
+
+1. Install the npm package.
+
+ ```powershell
+ npm install azure-maps-spatial-io
+ ```
+
+2. Then, use an import declaration to add the module to a source file:
+
+ ```js
+ import * as spatial from "azure-maps-spatial-io";
+ ```
+
+3. Use `spatial.io.ogc.WKT.read()` to parse the text.
+
+ ```js
+ import * as atlas from "azure-maps-control";
+ import * as spatial from "azure-maps-spatial-io";
+ import "azure-maps-control/dist/atlas.min.css";
+
+ const onload = () => {
+ // Initialize a map instance.
+ const map = new atlas.Map("map", {
+ center: [-122.34009, 47.60995],
+ zoom: 12,
+ view: "Auto",
+ // Add authentication details for connecting to Azure Maps.
+ authOptions: {
+ authType: "subscriptionKey",
+ subscriptionKey: "<Your Azure Maps Key>"
+ }
+ });
+
+ // Wait until the map resources are ready.
+ map.events.add("ready", () => {
+ // Create a data source and add it to the map.
+ const datasource = new atlas.source.DataSource();
+ map.sources.add(datasource);
+
+ // Create a layer to render the data
+ map.layers.add(new atlas.layer.BubbleLayer(datasource));
+
+ // Parse the point string.
+ var point = spatial.io.ogc.WKT.read("POINT(-122.34009 47.60995)");
+
+ // Add the parsed data to the data source.
+ datasource.add(point);
+ });
+ };
+
+ document.body.onload = onload;
+ ```
+
+4. Webpack 5 may throw errors about not being able to resolve some node.js core modules. Add these settings to your Webpack config file to fix the problem.
+
+ ```js
+ module.exports = {
+ // ...
+ resolve: {
+ fallback: { "crypto": false, "worker_threads": false }
+ }
+ };
+ ```
+
+This image is a screenshot of the sample's output.
+
+## Next steps
+
+Learn how to create and interact with a map:
+
+> [!div class="nextstepaction"]
+> [Create a map](map-create.md)
+
+Learn how to style a map:
+
+> [!div class="nextstepaction"]
+> [Choose a map style](choose-map-style.md)
+
+Learn best practices and see samples:
+
+> [!div class="nextstepaction"]
+> [Best practices](web-sdk-best-practices.md)
+
+> [!div class="nextstepaction"]
+> [Code samples](/samples/browse/?products=azure-maps)
+
+[azure-maps-control]: https://www.npmjs.com/package/azure-maps-control
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[authentication options]: /javascript/api/azure-maps-control/atlas.authenticationoptions
+[Webpack]: https://webpack.js.org/
+[style-loader]: https://webpack.js.org/loaders/style-loader/
+[azure-maps-drawing-tools]: ./set-drawing-options.md
+[azure-maps-indoor]: ./how-to-use-indoor-module.md
+[azure-maps-spatial-io]: ./how-to-use-spatial-io-module.md
azure-maps Map Extruded Polygon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md
Title: Add a polygon extrusion layer to a map | Microsoft Azure Maps
+ Title: Add a polygon extrusion layer to a map
+ description: How to add a polygon extrusion layer to the Microsoft Azure Maps Web SDK.
# Add a polygon extrusion layer to the map
-This article shows you how to use the polygon extrusion layer to render areas of `Polygon` and `MultiPolygon` feature geometries as extruded shapes. The Azure Maps Web SDK supports rendering of Circle geometries as defined in the [extended GeoJSON schema](extend-geojson.md#circle). These circles can be transformed into polygons when rendered on the map. All feature geometries may be updated easily when wrapped with the [atlas.Shape](/javascript/api/azure-maps-control/atlas.shape) class.
+This article shows you how to use the polygon extrusion layer to render areas of `Polygon` and `MultiPolygon` feature geometries as extruded shapes. The Azure Maps Web SDK supports rendering of Circle geometries as defined in the [extended GeoJSON schema]. These circles can be transformed into polygons when rendered on the map. All feature geometries may be updated easily when wrapped with the [atlas.Shape] class.
## Use a polygon extrusion layer
-Connect the [polygon extrusion layer](/javascript/api/azure-maps-control/atlas.layer.polygonextrusionlayer) to a data source. Then, loaded it on the map. The polygon extrusion layer renders the areas of a `Polygon` and `MultiPolygon` features as extruded shapes. The `height` and `base` properties of the polygon extrusion layer define the base distance from the ground and height of the extruded shape in **meters**. The following code shows how to create a polygon, add it to a data source, and render it using the Polygon extrusion layer class.
+Connect the [polygon extrusion layer] to a data source. Then, load it on the map. The polygon extrusion layer renders the areas of `Polygon` and `MultiPolygon` features as extruded shapes. The `height` and `base` properties of the polygon extrusion layer define the base distance from the ground and height of the extruded shape in **meters**. The following code shows how to create a polygon, add it to a data source, and render it using the Polygon extrusion layer class.
> [!NOTE] > The `base` value defined in the polygon extrusion layer should be less than or equal to that of the `height`.
The [Create a Choropleth Map] sample shows an extruded choropleth map of the Uni
## Add a circle to the map
-Azure Maps uses an extended version of the GeoJSON schema that provides a [definition for circles](./extend-geojson.md#circle). An extruded circle can be rendered on the map by creating a `point` feature with a `subType` property of `Circle` and a numbered `Radius` property representing the radius in **meters**. For example:
+Azure Maps uses an extended version of the GeoJSON schema that provides a [definition for circles]. An extruded circle can be rendered on the map by creating a `point` feature with a `subType` property of `Circle` and a numbered `Radius` property representing the radius in **meters**. For example:
```javascript {
The Polygon Extrusion layer has several styling options. The [Polygon Extrusion
Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [Polygon](/javascript/api/azure-maps-control/atlas.data.polygon)
+> [Polygon]
> [!div class="nextstepaction"]
-> [polygon extrusion layer](/javascript/api/azure-maps-control/atlas.layer.polygonextrusionlayer)
+> [polygon extrusion layer]
-Additional resources:
+More resources:
> [!div class="nextstepaction"]
-> [Azure Maps GeoJSON specification extension](extend-geojson.md#circle)
-
-[Create a Choropleth Map]: https://samples.azuremaps.com/?sample=create-a-choropleth-map
-[Polygon Extrusion Layer Options]: https://samples.azuremaps.com/?sample=polygon-extrusion-layer-options
+> [Azure Maps GeoJSON specification extension]
+[atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
+[Azure Maps GeoJSON specification extension]: extend-geojson.md#circle
[Create a Choropleth Map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Demos/Create%20a%20Choropleth%20Map/Create%20a%20Choropleth%20Map.html
+[Create a Choropleth Map]: https://samples.azuremaps.com/?sample=create-a-choropleth-map
+[definition for circles]: extend-geojson.md#circle
+[extended GeoJSON schema]: extend-geojson.md#circle
[Polygon Extrusion Layer Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Polygons/Polygon%20Extrusion%20Layer%20Options/Polygon%20Extrusion%20Layer%20Options.html
+[Polygon Extrusion Layer Options]: https://samples.azuremaps.com/?sample=polygon-extrusion-layer-options
+[polygon extrusion layer]: /javascript/api/azure-maps-control/atlas.layer.polygonextrusionlayer
+[Polygon]: /javascript/api/azure-maps-control/atlas.data.polygon
azure-maps Map Get Shape Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md
description: In this article, learn how to get shape data drawn on a map using the Microsoft Azure Maps Web SDK. Previously updated : 06/15/2023 Last updated : 07/13/2023
azure-maps Map Show Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md
The [Traffic Overlay Options] tool lets you switch between the different traffic
## Add traffic controls
-There are two different traffic controls that can be added to the map. The first control, `TrafficControl`, adds a toggle button that can be used to turn traffic on and off. Options for this control allow you to specify when traffic settings to use when show traffic. By default this control will display relative traffic flow and incident data, however, you could change this to show absolute traffic flow and no incidents if desired. The second control, `TrafficLegendControl`, adds a traffic flow legend to the map that helps user understand what the color code road highlights mean. This control will only appear on the map when traffic flow data is displayed on the map and will be hidden at all other times.
+There are two different traffic controls that can be added to the map. The first control, `TrafficControl`, adds a toggle button that can be used to turn traffic on and off. Options for this control let you specify which traffic settings to use when showing traffic. By default, this control displays relative traffic flow and incident data; however, you can change this behavior to show absolute traffic flow and no incidents if desired. The second control, `TrafficLegendControl`, adds a traffic flow legend to the map that helps users understand what the color-coded road highlights mean. This control only appears on the map when traffic flow data is displayed and is hidden at all other times.
The following code shows how to add the traffic controls to the map.
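A minimal sketch, assuming an initialized `map` and a version of the map control that includes both controls (the position value is illustrative):

```javascript
// Add the traffic toggle control and the traffic flow legend to the map.
map.controls.add([
    new atlas.control.TrafficControl(),
    new atlas.control.TrafficLegendControl()
], {
    position: 'top-right'
});
```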
The [Traffic controls] sample is a fully functional map that shows how to displa
Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [Map](/javascript/api/azure-maps-control/atlas.map)
+> [Map]
> [!div class="nextstepaction"]
-> [TrafficOptions](/javascript/api/azure-maps-control/atlas.trafficoptions)
+> [TrafficOptions]
Enhance your user experiences: > [!div class="nextstepaction"]
-> [Map interaction with mouse events](map-events.md)
+> [Map interaction with mouse events]
> [!div class="nextstepaction"]
-> [Building an accessible map](map-accessibility.md)
+> [Building an accessible map]
> [!div class="nextstepaction"]
-> [Code sample page](https://aka.ms/AzureMapsSamples)
+> [Code sample page]
-[Traffic Overlay]: https://samples.azuremaps.com/traffic/traffic-overlay
+[Building an accessible map]: map-accessibility.md
+[Code sample page]: https://aka.ms/AzureMapsSamples
+[Map interaction with mouse events]: map-events.md
+[Map]: /javascript/api/azure-maps-control/atlas.map
+[Traffic controls source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Traffic/Traffic%20controls/Traffic%20controls.html
[Traffic controls]: https://samples.azuremaps.com/traffic/traffic-controls
+[Traffic Overlay Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Traffic/Traffic%20Overlay%20Options/Traffic%20Overlay%20Options.html
[Traffic Overlay Options]: https://samples.azuremaps.com/traffic/traffic-overlay-options
[Traffic Overlay source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Traffic/Traffic%20Overlay/Traffic%20Overlay.html
-[Traffic controls source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Traffic/Traffic%20controls/Traffic%20controls.html
-[Traffic Overlay Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Traffic/Traffic%20Overlay%20Options/Traffic%20Overlay%20Options.html
+[Traffic Overlay]: https://samples.azuremaps.com/traffic/traffic-overlay
+[TrafficOptions]: /javascript/api/azure-maps-control/atlas.trafficoptions
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
in certain markets, as such the market of the user is specified using the `setMk
<script type="text/javascript" src="https://www.bing.com/api/maps/mapcontrol?callback=initMap&setLang={language-code}&setMkt={market}&UR={region-code}" async defer></script> ```
-Here's an example of Bing Maps with the language set to "fr-FR".
+Here's an example of Bing Maps with the language set to `fr-FR`.
![Localized Bing Maps map](media/migrate-bing-maps-web-app/bing-maps-localized-map.jpg) **After: Azure Maps**
-Azure Maps only provides options for setting the language and regional view of the map. A market parameter isn't used to limit features. There are two different ways of setting the language and regional view of the map. The first option is to add this information to the global `atlas` namespace that results in all map control instances in your app defaulting to these settings. The following sets the language to French ("fr-FR") and the regional view to `"Auto"`:
+Azure Maps only provides options for setting the language and regional view of the map. A market parameter isn't used to limit features. There are two different ways of setting the language and regional view of the map. The first option is to add this information to the global `atlas` namespace that results in all map control instances in your app defaulting to these settings. The following sets the language to French (`fr-FR`) and the regional view to `"Auto"`:
```javascript
// Option 1: set defaults globally on the atlas namespace.
atlas.setLanguage('fr-FR');
atlas.setView('Auto');

// Option 2: set the language and regional view per map instance.
map = new atlas.Map('myMap', {
    language: 'fr-FR',
    view: 'Auto'
});
```
> [!NOTE]
> Azure Maps can load multiple map instances on the same page with different language and region settings. It is also possible to update these settings in the map after it has loaded. For a list of supported languages in Azure Maps, see [Localization support in Azure Maps].
-Here's an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
+Here's an example of Azure Maps with the language set to `fr` and the user region set to `fr-FR`.
![Localized Azure Maps map](media/migrate-bing-maps-web-app/bing-maps-localized-map.jpg)
map.events.add('click', marker, function () {
When many data points are visualized on the map, points overlap, the map looks cluttered, and it becomes difficult to see and use. Clustering of point data can be used to improve this user experience and also improve performance. Clustering point data is the process of combining point data that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points.
-The following example loads a GeoJSON feed of earthquake data from the past week and add it to the map. Clusters are rendered as scaled and colored circles depending on the number of points they contain.
+The following example loads a GeoJSON feed of earthquake data from the past week and adds it to the map. Clusters are rendered as scaled and colored circles depending on the number of points they contain.
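A minimal sketch of this clustering pattern in the Azure Maps Web SDK, assuming an initialized `map` (the feed URL, radii, and color thresholds are illustrative):

```javascript
// A data source with clustering enabled.
var datasource = new atlas.source.DataSource(null, {
    cluster: true,
    clusterRadius: 45,  // Pixel radius used to combine nearby points.
    clusterMaxZoom: 15  // Beyond this zoom level, points render individually.
});
map.sources.add(datasource);

// Render clusters as circles that scale and change color with point_count.
map.layers.add(new atlas.layer.BubbleLayer(datasource, null, {
    filter: ['has', 'point_count'],
    radius: ['step', ['get', 'point_count'], 15, 100, 25, 750, 35],
    color: ['step', ['get', 'point_count'], 'green', 100, 'yellow', 750, 'red'],
    strokeWidth: 0
}));

// Import the GeoJSON earthquake feed (URL shown as an example feed).
datasource.importDataFromUrl('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson');
```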
> [!NOTE]
> There are several different algorithms used for pushpin clustering. Bing Maps uses a simple grid-based function, while Azure Maps uses a more advanced and visually appealing point-based clustering method.

**Before: Bing Maps**
-In Bing Maps, GeoJSON data can be loaded using the GeoJSON module. Pushpins can be clustered by loading in the clustering module and using the clustering layer it contains.
+In Bing Maps, GeoJSON data can be loaded using the GeoJSON module. Pushpins are clustered by loading the clustering module and using the clustering layer it contains.
```html <!DOCTYPE html>
In Azure Maps, data is added and managed by a data source. Layers connect to dat
* `cluster` – Tells the data source to cluster point data.
* `clusterRadius` - The radius in pixels to cluster points together.
-* `clusterMaxZoom` - The maximum zoom level that clustering occurs. Any additional zooming results in all points being rendered as symbols.
+* `clusterMaxZoom` - The maximum zoom level at which clustering occurs. Zooming beyond this level results in all points being rendered as symbols.
* `clusterProperties` - Defines custom properties that are calculated using expressions against all the points within each cluster and added to the properties of each cluster point.

When clustering is enabled, the data source sends clustered and unclustered data points to layers for rendering. The data source is capable of clustering hundreds of thousands of data points. A clustered data point has the following properties on it:
Running this code in a browser displays a map that looks like the following imag
**After: Azure Maps**
-In Azure Maps, GeoJSON is the main data format used in the web SDK, additional spatial data formats can be easily integrated in using the [spatial IO module]. This module has functions for both reading and writing spatial data and also includes a simple data layer that can easily render data from any of these spatial data formats. To read the data in a spatial data file, pass in a URL, or raw data as string or blob into the `atlas.io.read` function. This returns all the parsed data from the file that can then be added to the map. KML is a bit more complex than most spatial data format as it includes a lot more styling information. The `SpatialDataLayer` class supports rendering most of these styles, however icons images have to be loaded into the map before loading the feature data, and ground overlays have to be added as layers to the map separately. When loading data via a URL, it should be hosted on a CORs enabled endpoint, or a proxy service should be passed in as an option into the read function.
+In Azure Maps, GeoJSON is the main data format used in the web SDK; other spatial data formats can be easily integrated using the [spatial IO module]. This module has functions for both reading and writing spatial data and also includes a simple data layer that can easily render data from any of these spatial data formats. To read the data in a spatial data file, pass a URL, or raw data as a string or blob, into the `atlas.io.read` function. This returns all the parsed data from the file, which can then be added to the map. KML is a bit more complex than most spatial data formats as it includes a lot more styling information. The `SpatialDataLayer` class supports rendering most of these styles; however, icon images have to be loaded into the map before loading the feature data, and ground overlays have to be added as layers to the map separately. When loading data via a URL, it should be hosted on a CORS-enabled endpoint, or a proxy service should be passed in as an option into the read function.
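A minimal sketch of this read pattern, assuming the spatial IO module is loaded and `map` is initialized (the KML URL is a placeholder):

```javascript
// Create a data source and a simple data layer that can render any of the
// supported spatial data formats.
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);
map.layers.add(new atlas.layer.SimpleDataLayer(datasource));

// Read a spatial data file from a CORS-enabled URL (placeholder URL),
// then add the parsed features to the data source.
atlas.io.read('https://example.com/data.kml').then(function (r) {
    if (r) {
        datasource.add(r);
    }
});
```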
```html <!DOCTYPE html>
Learn more about migrating from Bing Maps to Azure Maps.
> [!div class="nextstepaction"] > [Migrate a web service](migrate-from-bing-maps-web-services.md)
-<!End Links-->
-[road tiles]: /rest/api/maps/render/getmaptile
-[satellite tiles]: /rest/api/maps/render/getmapimagerytile
-[Cesium]: https://www.cesium.com/
-<!--[Cesium code samples]: https://samples.azuremaps.com/?search=Cesium-->
-[Cesium plugin]: /samples/azure-samples/azure-maps-cesium/azure-maps-cesium-js-plugin
-[Leaflet]: https://leafletjs.com/
-[Leaflet code samples]: https://samples.azuremaps.com/?search=leaflet
-[Leaflet plugin]: /samples/azure-samples/azure-maps-leaflet/azure-maps-leaflet-plugin
-[OpenLayers]: https://openlayers.org/
-<!--[OpenLayers code samples]: https://samples.azuremaps.com/?search=openlayers-->
-[OpenLayers plugin]: /samples/azure-samples/azure-maps-OpenLayers/azure-maps-OpenLayers-plugin
-
-<! If developing using a JavaScript framework, one of the following open-source projects may be useful ->
-[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
-[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components
-[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
-[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
-
-<!-- Key features support ->
-[Contour layer code samples]: https://samples.azuremaps.com/?search=contour
-[Gridded Data Source module]: https://github.com/Azure-Samples/azure-maps-gridded-data-source
-[Animation module]: https://github.com/Azure-Samples/azure-maps-animations
-[Spatial IO module]: how-to-use-spatial-io-module.md
-[open-source modules for the web SDK]: open-source-projects.md#open-web-sdk-modules
-
-<! Topics >
-[Load a map]: #load-a-map
-[Localizing the map]: #localizing-the-map
-[Setting the map view]: #setting-the-map-view
-[Adding a pushpin]: #adding-a-pushpin
+[Add a Bubble layer]: map-add-bubble-layer.md
+[Add a circle to the map]: map-add-shape.md#add-a-circle-to-the-map
+[Add a ground overlay]: #add-a-ground-overlay
+[Add a heat map layer]: map-add-heat-map-layer.md
+[Add a heat map]: #add-a-heat-map
+[Add a polygon to the map]: map-add-shape.md#use-a-polygon-layer
+[Add a popup]: map-add-popup.md
+[Add a Symbol layer]: map-add-pin.md
+[Add controls to a map]: map-add-controls.md
+[Add drawing tools]: #add-drawing-tools
+[Add HTML Markers]: map-add-custom-html.md
+[Add KML data to the map]: #add-kml-data-to-the-map
+[Add lines to the map]: map-add-line-layer.md
+[Add tile layers]: map-add-tile-layer.md
[Adding a custom pushpin]: #adding-a-custom-pushpin
-[Adding a polyline]: #adding-a-polyline
[Adding a polygon]: #adding-a-polygon
+[Adding a polyline]: #adding-a-polyline
+[Adding a pushpin]: #adding-a-pushpin
+[Animation module]: https://github.com/Azure-Samples/azure-maps-animations
+[atlas.data namespace]: /javascript/api/azure-maps-control/atlas.data
+[atlas.data.Position.fromLatLng]: /javascript/api/azure-maps-control/atlas.data.position
+[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-
+[atlas.layer.ImageLayer.getCoordinatesFromEdges]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-
+[atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
+[Azure Active Directory]: azure-maps-authentication.md#azure-ad-authentication
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps Glossary]: glossary.md
+[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
+[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components
+[Cesium plugin]: /samples/azure-samples/azure-maps-cesium/azure-maps-cesium-js-plugin
+[Cesium]: https://www.cesium.com/
+[Choose a map style]: choose-map-style.md
+[Cluster point data]: clustering-point-data-web-sdk.md
+[Clustering point data in the Web SDK]: clustering-point-data-web-sdk.md
+[Contour layer code samples]: https://samples.azuremaps.com/?search=contour
+[Create a data source]: create-data-source-web-sdk.md
[Display an infobox]: #display-an-infobox
-[Pushpin clustering]: #pushpin-clustering
-[Add a heat map]: #add-a-heat-map
-[Overlay a tile layer]: #overlay-a-tile-layer
-[Show traffic data]: #show-traffic-data
-[Add a ground overlay]: #add-a-ground-overlay
-[Add KML data to the map]: #add-kml-data-to-the-map
-[Add drawing tools]: #add-drawing-tools
-
-<! Additional resources -->
-[Add a heat map layer]: map-add-heat-map-layer.md
+[Drawing tools module code samples]: https://samples.azuremaps.com#drawing-tools-module
+[free account]: https://azure.microsoft.com/free/
+[Gridded Data Source module]: https://github.com/Azure-Samples/azure-maps-gridded-data-source
[Heat map layer class]: /javascript/api/azure-maps-control/atlas.layer.heatmaplayer
[Heat map layer options]: /javascript/api/azure-maps-control/atlas.heatmaplayeroptions
-[Use data-driven style expressions]: data-driven-style-expressions-web-sdk.md
-[Choose a map style]: choose-map-style.md
-[Supported map styles]: supported-map-styles.md
-
-[Create a data source]: create-data-source-web-sdk.md
-[Add a Symbol layer]: map-add-pin.md
-[Add a Bubble layer]: map-add-bubble-layer.md
-[Cluster point data]: clustering-point-data-web-sdk.md
-[Symbol layer icon options]: /javascript/api/azure-maps-control/atlas.iconoptions
-[Symbol layer text option]: /javascript/api/azure-maps-control/atlas.textoptions
[HTML marker class]: /javascript/api/azure-maps-control/atlas.htmlmarker
[HTML marker options]: /javascript/api/azure-maps-control/atlas.htmlmarkeroptions
-[Add HTML Markers]: map-add-custom-html.md
-
-[Add lines to the map]: map-add-line-layer.md
+[Image layer class]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
+[Leaflet code samples]: https://samples.azuremaps.com/?search=leaflet
+[Leaflet plugin]: /samples/azure-samples/azure-maps-leaflet/azure-maps-leaflet-plugin
+[Leaflet]: https://leafletjs.com/
[Line layer options]: /javascript/api/azure-maps-control/atlas.linelayeroptions
-[Add a polygon to the map]: map-add-shape.md#use-a-polygon-layer
-[Add a circle to the map]: map-add-shape.md#add-a-circle-to-the-map
+[Load a map]: #load-a-map
+[Localization support in Azure Maps]: supported-languages.md
+[Localizing the map]: #localizing-the-map
+[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
+[OpenLayers plugin]: /samples/azure-samples/azure-maps-OpenLayers/azure-maps-OpenLayers-plugin
+[OpenLayers]: https://openlayers.org/
+[open-source Azure Maps Web SDK modules]: open-source-projects.md#open-web-sdk-modules
+[open-source modules for the web SDK]: open-source-projects.md#open-web-sdk-modules
+[Overlay a tile layer]: #overlay-a-tile-layer
+[Overlay an image]: map-add-image-layer.md
[Polygon layer options]: /javascript/api/azure-maps-control/atlas.polygonlayeroptions
-[Add a popup]: map-add-popup.md
+[Popup class]: /javascript/api/azure-maps-control/atlas.popup
+[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content
[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes
+[Pushpin clustering]: #pushpin-clustering
[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
-[Popup class]: /javascript/api/azure-maps-control/atlas.popup
-[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
-
-[Add tile layers]: map-add-tile-layer.md
-[Tile layer class]: /javascript/api/azure-maps-control/atlas.layer.tilelayer
-[Tile layer options]: /javascript/api/azure-maps-control/atlas.tilelayeroptions
-
+[road tiles]: /rest/api/maps/render/getmaptile
+[satellite tiles]: /rest/api/maps/render/getmapimagerytile
+[Setting the map view]: #setting-the-map-view
+[Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication
+[Show traffic data]: #show-traffic-data
[Show traffic on the map]: map-show-traffic.md
-[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options
-[Traffic control]: https://samples.azuremaps.com/?sample=traffic-controls
-
-[Overlay an image]: map-add-image-layer.md
-[Image layer class]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
-
-[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-
[SimpleDataLayer]: /javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer
[SimpleDataLayerOptions]: /javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions
-[Use the drawing tools module]: set-drawing-options.md
-[Drawing tools module code samples]: https://samples.azuremaps.com#drawing-tools-module
-
-<!>
-
-[free account]: https://azure.microsoft.com/free/
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Spatial IO module]: how-to-use-spatial-io-module.md
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication
-[Azure Active Directory]: azure-maps-authentication.md#azure-ad-authentication
-[Use the Azure Maps map control]: how-to-use-map-control.md
-[atlas.data namespace]: /javascript/api/azure-maps-control/atlas.data
-[atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
-[atlas.data.Position.fromLatLng]: /javascript/api/azure-maps-control/atlas.data.position
+[Supported map styles]: supported-map-styles.md
+[Symbol layer icon options]: /javascript/api/azure-maps-control/atlas.iconoptions
+[Symbol layer text option]: /javascript/api/azure-maps-control/atlas.textoptions
+[Tile layer class]: /javascript/api/azure-maps-control/atlas.layer.tilelayer
+[Tile layer options]: /javascript/api/azure-maps-control/atlas.tilelayeroptions
+[Traffic control]: https://samples.azuremaps.com/?sample=traffic-controls
+[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options
[turf js]: https://turfjs.org
-[Azure Maps Glossary]: glossary.md
-[Add controls to a map]: map-add-controls.md
-[Localization support in Azure Maps]: supported-languages.md
-[open-source Azure Maps Web SDK modules]: open-source-projects.md#open-web-sdk-modules
-[Clustering point data in the Web SDK]: clustering-point-data-web-sdk.md
-[atlas.layer.ImageLayer.getCoordinatesFromEdges]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-
+[Use data-driven style expressions]: data-driven-style-expressions-web-sdk.md
+[Use the Azure Maps map control]: how-to-use-map-control.md
+[Use the drawing tools module]: set-drawing-options.md
+[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Azure Maps also supports:
* `typeahead` - Specifies if the query is interpreted as a partial input and the search enters predictive mode (autosuggest/autocomplete).
-* `countrySet` ΓÇô A comma-separated list of ISO2 countries codes in which to limit the search to.
+* `countrySet` – A comma-separated list of ISO2 country codes to limit the search to.
* `lat`/`lon`, `topLeft`/`btmRight`, `radius` – Specify user location and area to make the results more locally relevant.
* `ofs` - Page through the results in combination with the `maxResults` parameter.
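As an illustrative sketch, a free-form address search using some of these parameters might look like the following (the subscription key and parameter values are placeholders):

```javascript
// Free-form address search with typeahead and countrySet.
// <Your-Azure-Maps-Key> is a placeholder subscription key.
var url = 'https://atlas.microsoft.com/search/address/json' +
    '?api-version=1.0' +
    '&subscription-key=<Your-Azure-Maps-Key>' +
    '&typeahead=true' +
    '&countrySet=US,CA' +
    '&query=' + encodeURIComponent('400 Broad St, Seattle');

fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (results) { console.log(results); });
```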
The following table cross-references the Bing Maps API parameters with the compa
| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
-The Azure Maps routing API also supports truck routing within the same API. The following table cross-references the additional Bing Maps truck routing parameters with the comparable API parameters in Azure Maps.
+The Azure Maps routing API also supports truck routing within the same API. The following table cross-references the other Bing Maps truck routing parameters with the comparable API parameters in Azure Maps.
| Bing Maps API parameter | Comparable Azure Maps API parameter |
|--|--|
The following table cross-references the Bing Maps API parameters with the compa
| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
-The Azure Maps routing API also supports truck routing parameter within the same API to ensure logical paths are calculated. The following table cross-references the additional Bing Maps truck routing parameters with the comparable API parameters in Azure Maps.
+The Azure Maps routing API also supports truck routing parameters within the same API to ensure logical paths are calculated. The following table cross-references the other Bing Maps truck routing parameters with the comparable API parameters in Azure Maps.
| Bing Maps API parameter | Comparable Azure Maps API parameter |
|--|--|
The Azure Maps route directions API doesn't currently return speed limit data, h
The Azure Maps Web SDK uses vector tiles to render the maps. These vector tiles contain the raw road geometry information and can be used to calculate the nearest road to a coordinate for simple snapping of individual coordinates. This is useful when you want the coordinates to visually appear over roads and you're already using the Azure Maps Web SDK to visualize the data.
-This approach however will only snap to the road segments that are loaded within the map view. When zoomed out at country/region level there may be no road data, so snapping canΓÇÖt be done, however at that zoom level a single pixel can represent the area of several city blocks so snapping isnΓÇÖt needed. To address this, the snapping logic can be applied every time the map has finished moving. To see a fully functional example of this snapping logic, see the [Basic snap to road logic] sample in the Azure Maps samples.
+This approach, however, only snaps to the road segments that are loaded within the map view. When zoomed out at the country/region level there may be no road data, so snapping can't be done. However, at that zoom level a single pixel can represent the area of several city blocks, so snapping isn't needed. To address this, the snapping logic can be applied every time the map has finished moving. To see a fully functional example of this snapping logic, see the [Basic snap to road logic] sample in the Azure Maps samples.
**Using the Azure Maps vector tiles directly to snap coordinates**
In Azure Maps, pushpins can also be added to a static map image by specifying th
> `&pins=iconType|pinStyles||pinLocation1|pinLocation2|...`
-Additional styles can be used by adding more `pins` parameters to the URL with a different style and set of locations.
+More styles can be used by adding more `pins` parameters to the URL with a different style and set of locations.
Regarding pin locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma** separating longitude and latitude in Azure Maps.
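As an illustrative sketch, a static map request with two pins in the default style might look like the following (the key, center, and pin locations are placeholders):

```javascript
// Static map image with two pins in the default style. Note the
// space-separated `longitude latitude` pin locations.
var staticMapUrl = 'https://atlas.microsoft.com/map/static/png' +
    '?api-version=1.0' +
    '&subscription-key=<Your-Azure-Maps-Key>' +
    '&center=-110,45' +
    '&zoom=3' +
    '&width=600&height=400' +
    '&pins=default||-110 45|-100 42';
```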
In Bing Maps, lines, and polygons can be added to a static map image by using th
> `&drawCurve=shapeType,styleType,location1,location2...`
-More styles can be used by adding additional `drawCurve` parameters to the URL with a different style and set of locations.
+More styles can be used by adding more `drawCurve` parameters to the URL with a different style and set of locations.
Locations in Bing Maps are specified with the format `latitude1,longitude1_latitude2,longitude2_…`. Locations can also be encoded.
Learn more about the Azure Maps REST services.
> [!div class="nextstepaction"] > [Best practices for using the search service](how-to-use-best-practices-for-search.md)
+[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
+[Authentication with Azure Maps]: azure-maps-authentication.md
+[Azure Cosmos DB geospatial capabilities overview]: ../cosmos-db/sql-query-geospatial-intro.md
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-
-[Search]: /rest/api/maps/search
-[Route directions]: /rest/api/maps/route/getroutedirections
-[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
-[Render]: /rest/api/maps/render/getmapimage
-[Route Range]: /rest/api/maps/route/getrouterange
-[POST Route directions]: /rest/api/maps/route/postroutedirections
-[Route]: /rest/api/maps/route
-[Time Zone]: /rest/api/maps/timezone
- [Azure Maps Creator]: creator-indoor-maps.md
-[Spatial operations]: /rest/api/maps/spatial
-[Map Tiles]: /rest/api/maps/render/getmaptile
-[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
+[Azure Maps supported views]: supported-languages.md#azure-maps-supported-views
+[Azure SQL Spatial – Query nearest neighbor]: /sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor
+[Azure SQL Spatial Data Types overview]: /sql/relational-databases/spatial/spatial-data-types-overview
+[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
+[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
+[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
+[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
[Batch routing]: /rest/api/maps/route/postroutedirectionsbatchpreview
-[Traffic]: /rest/api/maps/traffic
-[Geolocation API]: /rest/api/maps/geolocation/get-ip-to-location
-[Weather services]: /rest/api/maps/weather
-
-[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md
[Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md
+[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md
+[Calculate route]: /rest/api/maps/route/getroutedirections
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
[free account]: https://azure.microsoft.com/free/
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
- [Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
-[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
-[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
-[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
-[Authentication with Azure Maps]: azure-maps-authentication.md
+[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[Geolocation API]: /rest/api/maps/geolocation/get-ip-to-location
[Localization support in Azure Maps]: supported-languages.md
-[Azure Maps supported views]: supported-languages.md#azure-maps-supported-views
-
-[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
-[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
-[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
-
-[POI search]: /rest/api/maps/search/get-search-poi
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Map image render]: /rest/api/maps/render/getmapimagerytile
+[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
+[Map Tiles]: /rest/api/maps/render/getmaptile
+[nearby search]: /rest/api/maps/search/getsearchnearby
+[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite
[POI category search]: /rest/api/maps/search/get-search-poi-category
-[Calculate route]: /rest/api/maps/route/getroutedirections
-[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
-
-[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path
-[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
-
+[POI search]: /rest/api/maps/search/get-search-poi
+[POST Route directions]: /rest/api/maps/route/postroutedirections
[quadtree tile pyramid math]: zoom-levels-and-tile-grid.md
-[turf js]: https://turfjs.org
-[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite
-
-[Map image render]: /rest/api/maps/render/getmapimagerytile
-[Supported map styles]: supported-map-styles.md
[Render custom data on a raster map]: how-to-render-custom-data.md
+[Render]: /rest/api/maps/render/getmapimage
+[Route directions]: /rest/api/maps/route/getroutedirections
+[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
+[Route Range]: /rest/api/maps/route/getrouterange
+[Route]: /rest/api/maps/route
[Search along route]: /rest/api/maps/search/postsearchalongroute
-[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
-[nearby search]: /rest/api/maps/search/getsearchnearby
-[Search Polygon API]: /rest/api/maps/search/getsearchpolygon
[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
+[Search Polygon API]: /rest/api/maps/search/getsearchpolygon
+[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
+[Search]: /rest/api/maps/search
+[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path
+[Spatial operations]: /rest/api/maps/spatial
+[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Supported map styles]: supported-map-styles.md
+[Time zone by coordinate]: /rest/api/maps/timezone/gettimezonebycoordinates
+[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
+[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
+[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
+[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
+[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
+[Time Zone]: /rest/api/maps/timezone
[Traffic flow segments]: /rest/api/maps/traffic/gettrafficflowsegment
[Traffic flow tiles]: /rest/api/maps/traffic/gettrafficflowtile
[Traffic incident details]: /rest/api/maps/traffic/gettrafficincidentdetail
[Traffic incident tiles]: /rest/api/maps/traffic/gettrafficincidenttile
[Traffic incident viewport]: /rest/api/maps/traffic/gettrafficincidentviewport
-[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
-[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
-[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
-[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
-[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
-[Time zone by coordinate]: /rest/api/maps/timezone/gettimezonebycoordinates
-
-[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
-
-[Azure SQL Spatial Data Types overview]: /sql/relational-databases/spatial/spatial-data-types-overview
-[Azure SQL Spatial ΓÇô Query nearest neighbor]: /sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor
-[Azure Cosmos DB geospatial capabilities overview]: ../cosmos-db/sql-query-geospatial-intro.md
+[Traffic]: /rest/api/maps/traffic
+[turf js]: https://turfjs.org
+[Weather services]: /rest/api/maps/weather
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
The following table provides a high-level list of Bing Maps features and the rel
| Traffic Incidents | ✓ |
| Configuration driven maps | N/A |
-<sup>1</sup> While there is no direct replacement for the Bing Maps *Snap to road* service, this functionality can be implemented using the Azure Maps [Route - Get Route Directions] REST API. For a complete code sample demonstrating the snap to road functionality, see the [Basic snap to road logic] sample that demonstrates how to snap individual points to the rendered roads on the map. Also see the [Snap points to logical route path] sample that shows how to snap points to the road network to form a logical path.
+<sup>1</sup> While there's no direct replacement for the Bing Maps *Snap to road* service, this functionality can be implemented using the Azure Maps [Route - Get Route Directions] REST API. For a complete code sample, see the [Basic snap to road logic] sample, which demonstrates how to snap individual points to the rendered roads on the map. Also see the [Snap points to logical route path] sample, which shows how to snap points to the road network to form a logical path.
Bing Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and highly secure Azure Active Directory authentication.
There are no resources that require cleanup.
Learn the details of how to migrate your Bing Maps application with these articles: > [!div class="nextstepaction"]
-> [Migrate a web app](migrate-from-bing-maps-web-app.md)
+> [Migrate a web app]
+[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[free Azure account]: https://azure.microsoft.com/free/
-[Azure portal]: https://portal.azure.com/
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Azure Maps Blog]: https://aka.ms/AzureMapsTechBlog
+[Azure Maps code samples]: https://aka.ms/AzureMapsSamples
+[Azure Maps developer forums]: https://aka.ms/AzureMapsForums
+[Azure Maps Feedback (UserVoice)]: https://aka.ms/AzureMapsFeedback
[Azure Maps is also available in Power BI]: power-bi-visual-get-started.md
-[Microsoft Azure terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31
[Azure Maps pricing page]: https://azure.microsoft.com/pricing/details/azure-maps/
-[Azure pricing calculator]: https://azure.microsoft.com/pricing/calculator/?service=azure-maps
-[Azure Maps term of use]: https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/MCA
-[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
-[azure.com]: https://azure.com
-[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
+[Azure Maps product documentation]: https://aka.ms/AzureMapsDocs
+[Azure Maps product page]: https://azure.com/maps
[Azure Maps Q&A]: /answers/topics/azure-maps.html
+[Azure Maps term of use]: https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/MCA
+[Azure portal]: https://portal.azure.com/
+[Azure pricing calculator]: https://azure.microsoft.com/pricing/calculator/?service=azure-maps
[Azure support options]: https://azure.microsoft.com/support/options/
-[Azure Maps product page]: https://azure.com/maps
-[Azure Maps product documentation]: https://aka.ms/AzureMapsDocs
-[Azure Maps code samples]: https://aka.ms/AzureMapsSamples
-[Azure Maps developer forums]: https://aka.ms/AzureMapsForums
-[Microsoft learning center shows]: https://aka.ms/AzureMapsVideos
-[Azure Maps Blog]: https://aka.ms/AzureMapsTechBlog
-[Azure Maps Feedback (UserVoice)]: https://aka.ms/AzureMapsFeedback
+[azure.com]: https://azure.com
[Basic snap to road logic]: https://samples.azuremaps.com/?search=Snap%20to%20road&sample=basic-snap-to-road-logic
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+[free Azure account]: https://azure.microsoft.com/free/
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Microsoft Azure terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31
+[Microsoft learning center shows]: https://aka.ms/AzureMapsVideos
+[Migrate a web app]: migrate-from-bing-maps-web-app.md
+[Route - Get Route Directions]: /rest/api/maps/route/get-route-directions
[Snap points to logical route path]: https://samples.azuremaps.com/?search=Snap%20to%20road&sample=snap-points-to-logical-route-path
-[Route - Get Route Directions]: https://learn.microsoft.com/rest/api/maps/route/get-route-directions
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Migrate From Google Maps Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-android-app.md
The Azure Maps Android SDK has an API interface that is similar to the Web SDK.
All examples are provided in Java; however, you can use Kotlin with the Azure Maps Android SDK.
-For more information on developing with the Android SDK by Azure Maps, see the [How-to guides for the Azure Maps Android SDK](how-to-use-android-map-control-library.md).
+For more information on developing with the Android SDK by Azure Maps, see the [How-to guides for the Azure Maps Android SDK].
## Prerequisites
Here's an example of Azure Maps with the language set to `fr-FR`.
![Azure Maps localization](media/migrate-google-maps-android-app/azure-maps-localization.png)
-Review the complete list of [Supported languages](supported-languages.md).
+Review the complete list of [Supported languages].
## Setting the map view
mapControl!!.onReady { map: AzureMap ->
**Additional resources:**
-* [Supported map styles](supported-map-styles.md)
+* [Supported map styles]
## Adding a marker
mapControl!!.onReady { map: AzureMap ->
## Adding a custom marker
-Custom images can be used to represent points on a map. The map in examples below uses a custom image to display a point on the map. The point is at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
+Custom images can be used to represent points on a map. The map in the following examples uses a custom image to display a point on the map. The point is at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
![yellow pushpin image](media/migrate-google-maps-web-app/yellow-pushpin.png)<br/> yellow-pushpin.png
public override fun onMapReady(googleMap: GoogleMap) {
In Azure Maps, polylines are called `LineString` or `MultiLineString` objects. Add these objects to a data source and render them using a line layer. Set the stroke width using the `strokeWidth` option. Add a stroke dash array using the `strokeDashArray` option.
-The stroke width and the dash array "pixel" units in the Azure Maps Web SDK, is the same as in the Google Maps service. Both accept the same values to produce the same results.
+The stroke width and the dash array "pixel" units in the Azure Maps Web SDK are the same as in the Google Maps service. Both accept the same values to produce the same results.
::: zone pivot="programming-language-java-android"
No resources to be cleaned up.
Learn more about the Azure Maps Android SDK: > [!div class="nextstepaction"]
-> [Get started with Azure Maps Android SDK](how-to-use-android-map-control-library.md)
+> [Get started with Azure Maps Android SDK]
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[free account]: https://azure.microsoft.com/free/
+[Get started with Azure Maps Android SDK]: how-to-use-android-map-control-library.md
+[How-to guides for the Azure Maps Android SDK]: how-to-use-android-map-control-library.md
[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Supported languages]: supported-languages.md
+[Supported map styles]: supported-map-styles.md
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
You may use custom images to represent points on a map. The following map uses a
<center>
-![yellow pushpin image](media/migrate-google-maps-web-app/yellow-pushpin.png)<br/>
+![yellow pushpin image](media/migrate-google-maps-web-app/yellow-pushpin.png)<br>
yellow-pushpin.png</center>

#### Before: Google Maps
The following appendix provides a cross reference of the commonly used classes i
| Google Maps | Azure Maps |
|--|--|
-| `google.maps.Map` | [atlas.Map](/javascript/api/azure-maps-control/atlas.map) |
-| `google.maps.InfoWindow` | [atlas.Popup](/javascript/api/azure-maps-control/atlas.popup) |
-| `google.maps.InfoWindowOptions` | [atlas.PopupOptions](/javascript/api/azure-maps-control/atlas.popupoptions) |
-| `google.maps.LatLng` | [atlas.data.Position](/javascript/api/azure-maps-control/atlas.data.position) |
-| `google.maps.LatLngBounds` | [atlas.data.BoundingBox](/javascript/api/azure-maps-control/atlas.data.boundingbox) |
-| `google.maps.MapOptions` | [atlas.CameraOptions](/javascript/api/azure-maps-control/atlas.cameraoptions)<br/>[atlas.CameraBoundsOptions](/javascript/api/azure-maps-control/atlas.cameraboundsoptions)<br/>[atlas.ServiceOptions](/javascript/api/azure-maps-control/atlas.serviceoptions)<br/>[atlas.StyleOptions](/javascript/api/azure-maps-control/atlas.styleoptions)<br/>[atlas.UserInteractionOptions](/javascript/api/azure-maps-control/atlas.userinteractionoptions) |
-| `google.maps.Point` | [atlas.Pixel](/javascript/api/azure-maps-control/atlas.pixel) |
+| `google.maps.Map` | [atlas.Map] |
+| `google.maps.InfoWindow` | [atlas.Popup] |
+| `google.maps.InfoWindowOptions` | [atlas.PopupOptions] |
+| `google.maps.LatLng` | [atlas.data.Position] |
+| `google.maps.LatLngBounds` | [atlas.data.BoundingBox] |
+| `google.maps.MapOptions` | [atlas.CameraOptions]<br>[atlas.CameraBoundsOptions]<br>[atlas.ServiceOptions]<br>[atlas.StyleOptions]<br>[atlas.UserInteractionOptions] |
+| `google.maps.Point` | [atlas.Pixel] |
## Overlay Classes

| Google Maps | Azure Maps |
|--|--|
-| `google.maps.Marker` | [atlas.HtmlMarker](/javascript/api/azure-maps-control/atlas.htmlmarker)<br/>[atlas.data.Point](/javascript/api/azure-maps-control/atlas.data.point) |
-| `google.maps.MarkerOptions` | [atlas.HtmlMarkerOptions](/javascript/api/azure-maps-control/atlas.htmlmarkeroptions)<br/>[atlas.layer.SymbolLayer](/javascript/api/azure-maps-control/atlas.layer.symbollayer)<br/>[atlas.SymbolLayerOptions](/javascript/api/azure-maps-control/atlas.symbollayeroptions)<br/>[atlas.IconOptions](/javascript/api/azure-maps-control/atlas.iconoptions)<br/>[atlas.TextOptions](/javascript/api/azure-maps-control/atlas.textoptions)<br/>[atlas.layer.BubbleLayer](/javascript/api/azure-maps-control/atlas.layer.bubblelayer)<br/>[atlas.BubbleLayerOptions](/javascript/api/azure-maps-control/atlas.bubblelayeroptions) |
-| `google.maps.Polygon` | [atlas.data.Polygon](/javascript/api/azure-maps-control/atlas.data.polygon) |
-| `google.maps.PolygonOptions` |[atlas.layer.PolygonLayer](/javascript/api/azure-maps-control/atlas.layer.polygonlayer)<br/> [atlas.PolygonLayerOptions](/javascript/api/azure-maps-control/atlas.polygonlayeroptions)<br/> [atlas.layer.LineLayer](/javascript/api/azure-maps-control/atlas.layer.linelayer)<br/> [atlas.LineLayerOptions](/javascript/api/azure-maps-control/atlas.linelayeroptions)|
-| `google.maps.Polyline` | [atlas.data.LineString](/javascript/api/azure-maps-control/atlas.data.linestring) |
-| `google.maps.PolylineOptions` | [atlas.layer.LineLayer](/javascript/api/azure-maps-control/atlas.layer.linelayer)<br/>[atlas.LineLayerOptions](/javascript/api/azure-maps-control/atlas.linelayeroptions) |
+| `google.maps.Marker` | [atlas.HtmlMarker]<br>[atlas.data.Point] |
+| `google.maps.MarkerOptions` | [atlas.HtmlMarkerOptions]<br>[atlas.layer.SymbolLayer]<br>[atlas.SymbolLayerOptions]<br>[atlas.IconOptions]<br>[atlas.TextOptions]<br>[atlas.layer.BubbleLayer]<br>[atlas.BubbleLayerOptions] |
+| `google.maps.Polygon` | [atlas.data.Polygon] |
+| `google.maps.PolygonOptions` |[atlas.layer.PolygonLayer]<br>[atlas.PolygonLayerOptions]<br> [atlas.layer.LineLayer]<br>[atlas.LineLayerOptions]|
+| `google.maps.Polyline` | [atlas.data.LineString] |
+| `google.maps.PolylineOptions` | [atlas.layer.LineLayer]<br>[atlas.LineLayerOptions] |
| `google.maps.Circle` | See [Add a circle to the map] |
-| `google.maps.ImageMapType` | [atlas.TileLayer](/javascript/api/azure-maps-control/atlas.layer.tilelayer) |
-| `google.maps.ImageMapTypeOptions` | [atlas.TileLayerOptions](/javascript/api/azure-maps-control/atlas.tilelayeroptions) |
-| `google.maps.GroundOverlay` | [atlas.layer.ImageLayer](/javascript/api/azure-maps-control/atlas.layer.imagelayer)<br/>[atlas.ImageLayerOptions](/javascript/api/azure-maps-control/atlas.imagelayeroptions) |
+| `google.maps.ImageMapType` | [atlas.TileLayer] |
+| `google.maps.ImageMapTypeOptions` | [atlas.TileLayerOptions] |
+| `google.maps.GroundOverlay` | [atlas.layer.ImageLayer]<br>[atlas.ImageLayerOptions] |
## Service Classes

The Azure Maps Web SDK includes a services module, which can be loaded separately. This module wraps the Azure Maps REST services with a web API and can be used in JavaScript, TypeScript, and Node.js applications.
-| Google Maps | Azure Maps |
-|-|-|
-| `google.maps.Geocoder` | [atlas.service.SearchUrl](/javascript/api/azure-maps-rest/atlas.service.searchurl) |
-| `google.maps.GeocoderRequest` | [atlas.SearchAddressOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressoptions)<br/>[atlas.SearchAddressRevrseOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressreverseoptions)<br/>[atlas.SearchAddressReverseCrossStreetOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressreversecrossstreetoptions)<br/>[atlas.SearchAddressStructuredOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressstructuredoptions)<br/>[atlas.SearchAlongRouteOptions](/javascript/api/azure-maps-rest/atlas.service.searchalongrouteoptions)<br/>[atlas.SearchFuzzyOptions](/javascript/api/azure-maps-rest/atlas.service.searchfuzzyoptions)<br/>[atlas.SearchInsideGeometryOptions](/javascript/api/azure-maps-rest/atlas.service.searchinsidegeometryoptions)<br/>[atlas.SearchNearbyOptions](/javascript/api/azure-maps-rest/atlas.service.searchnearbyoptions)<br/>[atlas.SearchPOIOptions](/javascript/api/azure-maps-rest/atlas.service.searchpoioptions)<br/>[atlas.SearchPOICategoryOptions](/javascript/api/azure-maps-rest/atlas.service.searchpoicategoryoptions) |
-| `google.maps.DirectionsService` | [atlas.service.RouteUrl](/javascript/api/azure-maps-rest/atlas.service.routeurl) |
-| `google.maps.DirectionsRequest` | [atlas.CalculateRouteDirectionsOptions](/javascript/api/azure-maps-rest/atlas.service.calculateroutedirectionsoptions) |
-| `google.maps.places.PlacesService` | [f](/javascript/api/azure-maps-rest/atlas.service.searchurl) |
+| Google Maps | Azure Maps |
+|--|--|
+| `google.maps.Geocoder` | [atlas.service.SearchUrl] |
+| `google.maps.GeocoderRequest` | [atlas.SearchAddressOptions]<br>[atlas.SearchAddressRevrseOptions]<br>[atlas.SearchAddressReverseCrossStreetOptions]<br>[atlas.SearchAddressStructuredOptions]<br>[atlas.SearchAlongRouteOptions]<br>[atlas.SearchFuzzyOptions]<br>[atlas.SearchInsideGeometryOptions]<br>[atlas.SearchNearbyOptions]<br>[atlas.SearchPOIOptions]<br>[atlas.SearchPOICategoryOptions] |
+| `google.maps.DirectionsService` | [atlas.service.RouteUrl] |
+| `google.maps.DirectionsRequest` | [atlas.CalculateRouteDirectionsOptions] |
+| `google.maps.places.PlacesService` | [atlas.service.SearchUrl] |
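A minimal sketch of the services module pattern, assuming the `azure-maps-rest` module is loaded (the subscription key and address are placeholders):

```javascript
// Create an authenticated pipeline and a SearchURL client, then geocode
// a free-form address.
var pipeline = atlas.service.MapsURL.newPipeline(
    new atlas.service.SubscriptionKeyCredential('<Your-Azure-Maps-Key>'));
var searchURL = new atlas.service.SearchURL(pipeline);

searchURL.searchAddress(atlas.service.Aborter.timeout(10000), '400 Broad St, Seattle, WA')
    .then(function (response) {
        // The geojson helper returns the results as renderable GeoJSON features.
        console.log(response.geojson.getFeatures());
    });
```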
## Libraries
Libraries add more functionality to the map. Many of these libraries are in
the core SDK of Azure Maps. Here are some equivalent classes to use in place of these Google Maps libraries:
-| Google Maps | Azure Maps |
-|--|--|
-| Drawing library | [Drawing tools module](set-drawing-options.md) |
-| Geometry library | [atlas.math](/javascript/api/azure-maps-control/atlas.math) |
-| Visualization library | [Heat map layer](map-add-heat-map-layer.md) |
+| Google Maps | Azure Maps |
+|--|--|
+| Drawing library | [Drawing tools module] |
+| Geometry library | [atlas.math] |
+| Visualization library | [Heat map layer] |
## Clean up resources
No resources to be cleaned up.
Learn more about migrating to Azure Maps: > [!div class="nextstepaction"]
-> [Migrate a web service](migrate-from-google-maps-web-services.md)
-
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[free account]: https://azure.microsoft.com/free/
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
-
-[road tiles]: /rest/api/maps/render/getmaptile
-[satellite tiles]: /rest/api/maps/render/getmapimagerytile
-
-[Cesium documentation]: https://www.cesium.com/
-[Leaflet code sample]: https://samples.azuremaps.com/?sample=render-azure-maps-in-leaflet
-[Leaflet documentation]: https://leafletjs.com/
-[OpenLayers documentation]: https://openlayers.org/
-
-[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
-[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components
-[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
-[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
+> [Migrate a web service]
[*atlas.data* namespace]: /javascript/api/azure-maps-control/atlas.data
[*atlas.Shape*]: /javascript/api/azure-maps-control/atlas.shape
-[atlas.data.Position.fromLatLng]: /javascript/api/azure-maps-control/atlas.data.position
-
-[npm module]: how-to-use-map-control.md
-
-[Load a map]: #load-a-map
-[Localizing the map]: #localizing-the-map
-[Setting the map view]: #setting-the-map-view
-[Adding a marker]: #adding-a-marker
-[Adding a custom marker]: #adding-a-custom-marker
-[Adding a polyline]: #adding-a-polyline
-[Adding a polygon]: #adding-a-polygon
-[Display an info window]: #display-an-info-window
-[Import a GeoJSON file]: #import-a-geojson-file
-[Marker clustering]: #marker-clustering
-[Add a heat map]: #add-a-heat-map
-[Overlay a tile layer]: #overlay-a-tile-layer
-[Show traffic data]: #show-traffic-data
+[`atlas.layer.ImageLayer.getCoordinatesFromEdges`]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-
+[Add a Bubble layer]: map-add-bubble-layer.md
+[Add a circle to the map]: map-add-shape.md#add-a-circle-to-the-map
[Add a ground overlay]: #add-a-ground-overlay
-[Add KML data to the map]: #add-kml-data-to-the-map
-
-[Use the Azure Maps map control]: how-to-use-map-control.md
+[Add a heat map layer]: map-add-heat-map-layer.md
+[Add a heat map]: #add-a-heat-map
+[Add a polygon to the map]: map-add-shape.md
+[Add a popup]: map-add-popup.md
+[Add a Symbol layer]: map-add-pin.md
[Add controls to a map]: map-add-controls.md
-[Localization support in Azure Maps]: supported-languages.md
-
+[Add HTML Markers]: map-add-custom-html.md
+[Add KML data to the map]: #add-kml-data-to-the-map
+[Add lines to the map]: map-add-line-layer.md
+[Add tile layers]: map-add-tile-layer.md
+[Adding a custom marker]: #adding-a-custom-marker
+[Adding a marker]: #adding-a-marker
+[Adding a polygon]: #adding-a-polygon
+[Adding a polyline]: #adding-a-polyline
+[atlas.BubbleLayerOptions]: /javascript/api/azure-maps-control/atlas.bubblelayeroptions
+[atlas.CalculateRouteDirectionsOptions]: /javascript/api/azure-maps-rest/atlas.service.calculateroutedirectionsoptions
+[atlas.CameraBoundsOptions]: /javascript/api/azure-maps-control/atlas.cameraboundsoptions
+[atlas.CameraOptions]: /javascript/api/azure-maps-control/atlas.cameraoptions
+[atlas.data.BoundingBox]: /javascript/api/azure-maps-control/atlas.data.boundingbox
+[atlas.data.LineString]: /javascript/api/azure-maps-control/atlas.data.linestring
+[atlas.data.Point]: /javascript/api/azure-maps-control/atlas.data.point
+[atlas.data.Polygon]: /javascript/api/azure-maps-control/atlas.data.polygon
+[atlas.data.Position.fromLatLng]: /javascript/api/azure-maps-control/atlas.data.position
+[atlas.data.Position]: /javascript/api/azure-maps-control/atlas.data.position
+[atlas.HtmlMarker]: /javascript/api/azure-maps-control/atlas.htmlmarker
+[atlas.HtmlMarkerOptions]: /javascript/api/azure-maps-control/atlas.htmlmarkeroptions
+[atlas.IconOptions]: /javascript/api/azure-maps-control/atlas.iconoptions
+[atlas.ImageLayerOptions]: /javascript/api/azure-maps-control/atlas.imagelayeroptions
+[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-
+[atlas.layer.BubbleLayer]: /javascript/api/azure-maps-control/atlas.layer.bubblelayer
+[atlas.layer.ImageLayer]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
+[atlas.layer.LineLayer]: /javascript/api/azure-maps-control/atlas.layer.linelayer
+[atlas.layer.PolygonLayer]: /javascript/api/azure-maps-control/atlas.layer.polygonlayer
+[atlas.layer.SymbolLayer]: /javascript/api/azure-maps-control/atlas.layer.symbollayer
+[atlas.LineLayerOptions]: /javascript/api/azure-maps-control/atlas.linelayeroptions
+[atlas.Map]: /javascript/api/azure-maps-control/atlas.map
+[atlas.math]: /javascript/api/azure-maps-control/atlas.math
+[atlas.Pixel]: /javascript/api/azure-maps-control/atlas.pixel
+[atlas.PolygonLayerOptions]: /javascript/api/azure-maps-control/atlas.polygonlayeroptions
+[atlas.Popup]: /javascript/api/azure-maps-control/atlas.popup
+[atlas.PopupOptions]: /javascript/api/azure-maps-control/atlas.popupoptions
+[atlas.SearchAddressOptions]: /javascript/api/azure-maps-rest/atlas.service.searchaddressoptions
+[atlas.SearchAddressReverseCrossStreetOptions]: /javascript/api/azure-maps-rest/atlas.service.searchaddressreversecrossstreetoptions
+[atlas.SearchAddressRevrseOptions]: /javascript/api/azure-maps-rest/atlas.service.searchaddressreverseoptions
+[atlas.SearchAddressStructuredOptions]: /javascript/api/azure-maps-rest/atlas.service.searchaddressstructuredoptions
+[atlas.SearchAlongRouteOptions]: /javascript/api/azure-maps-rest/atlas.service.searchalongrouteoptions
+[atlas.SearchFuzzyOptions]: /javascript/api/azure-maps-rest/atlas.service.searchfuzzyoptions
+[atlas.SearchInsideGeometryOptions]: /javascript/api/azure-maps-rest/atlas.service.searchinsidegeometryoptions
+[atlas.SearchNearbyOptions]: /javascript/api/azure-maps-rest/atlas.service.searchnearbyoptions
+[atlas.SearchPOICategoryOptions]: /javascript/api/azure-maps-rest/atlas.service.searchpoicategoryoptions
+[atlas.SearchPOIOptions]: /javascript/api/azure-maps-rest/atlas.service.searchpoioptions
+[atlas.service.RouteUrl]: /javascript/api/azure-maps-rest/atlas.service.routeurl
+[atlas.service.SearchUrl]: /javascript/api/azure-maps-rest/atlas.service.searchurl
+[atlas.ServiceOptions]: /javascript/api/azure-maps-control/atlas.serviceoptions
+[atlas.StyleOptions]: /javascript/api/azure-maps-control/atlas.styleoptions
+[atlas.SymbolLayerOptions]: /javascript/api/azure-maps-control/atlas.symbollayeroptions
+[atlas.TextOptions]: /javascript/api/azure-maps-control/atlas.textoptions
+[atlas.TileLayer]: /javascript/api/azure-maps-control/atlas.layer.tilelayer
+[atlas.TileLayerOptions]: /javascript/api/azure-maps-control/atlas.tilelayeroptions
+[atlas.UserInteractionOptions]: /javascript/api/azure-maps-control/atlas.userinteractionoptions
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
+[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components
+[Cesium documentation]: https://www.cesium.com/
[Choose a map style]: choose-map-style.md
-[Supported map styles]: supported-map-styles.md
-
-[Create a data source]: create-data-source-web-sdk.md
-[Add a Symbol layer]: map-add-pin.md
-[Add a Bubble layer]: map-add-bubble-layer.md
[Clustering point data in the Web SDK]: clustering-point-data-web-sdk.md
-[Add HTML Markers]: map-add-custom-html.md
-[Use data-driven style expressions]: data-driven-style-expressions-web-sdk.md
-[Symbol layer icon options]: /javascript/api/azure-maps-control/atlas.iconoptions
-[Symbol layer text option]: /javascript/api/azure-maps-control/atlas.textoptions
+[Create a data source]: create-data-source-web-sdk.md
+[Create a Fullscreen Control]: https://samples.azuremaps.com/?sample=fullscreen-control
+[Display an info window]: #display-an-info-window
+[Drawing tools module]: set-drawing-options.md
+[Drawing tools]: map-add-drawing-toolbar.md
+[f]: /javascript/api/azure-maps-rest/atlas.service.searchurl
+[free account]: https://azure.microsoft.com/free/
+[Get information from a coordinate (reverse geocode)]: map-get-information-from-coordinate.md
+[Heat map layer class]: /javascript/api/azure-maps-control/atlas.layer.heatmaplayer
+[Heat map layer options]: /javascript/api/azure-maps-control/atlas.heatmaplayeroptions
+[Heat map layer]: map-add-heat-map-layer.md
[HTML marker class]: /javascript/api/azure-maps-control/atlas.htmlmarker [HTML marker options]: /javascript/api/azure-maps-control/atlas.htmlmarkeroptions-
-[Add lines to the map]: map-add-line-layer.md
+[Image layer class]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
+[Import a GeoJSON file]: #import-a-geojson-file
+[Leaflet code sample]: https://samples.azuremaps.com/?sample=render-azure-maps-in-leaflet
+[Leaflet documentation]: https://leafletjs.com/
+[Limit Map to Two Finger Panning]: https://samples.azuremaps.com/?sample=limit-map-to-two-finger-panning
+[Limit Scroll Wheel Zoom]: https://samples.azuremaps.com/?sample=limit-scroll-wheel-zoom
[Line layer options]: /javascript/api/azure-maps-control/atlas.linelayeroptions-
-[Add a polygon to the map]: map-add-shape.md
-[Add a circle to the map]: map-add-shape.md#add-a-circle-to-the-map
+[Load a map]: #load-a-map
+[Localization support in Azure Maps]: supported-languages.md
+[Localizing the map]: #localizing-the-map
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Marker clustering]: #marker-clustering
+[Migrate a web service]: migrate-from-google-maps-web-services.md
+[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
+[npm module]: how-to-use-map-control.md
+[OpenLayers documentation]: https://openlayers.org/
+[Overlay a tile layer]: #overlay-a-tile-layer
+[Overlay an image]: map-add-image-layer.md
[Polygon layer options]: /javascript/api/azure-maps-control/atlas.polygonlayeroptions-
-[Add a popup]: map-add-popup.md
+[Popup class]: /javascript/api/azure-maps-control/atlas.popup
+[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content [Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes [Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
-[Popup class]: /javascript/api/azure-maps-control/atlas.popup
-[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
+[road tiles]: /rest/api/maps/render/getmaptile
+[satellite tiles]: /rest/api/maps/render/getmapimagerytile
+[Search Autosuggest with JQuery UI]: https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui
+[Search for points of interest]: map-search-location.md
+[Setting the map view]: #setting-the-map-view
+[Show directions from A to B]: map-route.md
+[Show traffic data]: #show-traffic-data
+[Show traffic on the map]: map-show-traffic.md
+[SimpleDataLayer]: /javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer
+[SimpleDataLayerOptions]: /javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions
[spatial IO module]: /javascript/api/azure-maps-spatial-io/-
-[Add a heat map layer]: map-add-heat-map-layer.md
-[Heat map layer class]: /javascript/api/azure-maps-control/atlas.layer.heatmaplayer
-[Heat map layer options]: /javascript/api/azure-maps-control/atlas.heatmaplayeroptions
-
-[Add tile layers]: map-add-tile-layer.md
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Supported map styles]: supported-map-styles.md
+[Symbol layer icon options]: /javascript/api/azure-maps-control/atlas.iconoptions
+[Symbol layer text option]: /javascript/api/azure-maps-control/atlas.textoptions
[Tile layer class]: /javascript/api/azure-maps-control/atlas.layer.tilelayer [Tile layer options]: /javascript/api/azure-maps-control/atlas.tilelayeroptions-
-[Show traffic on the map]: map-show-traffic.md
[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options-
-[`atlas.layer.ImageLayer.getCoordinatesFromEdges`]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-
-[Overlay an image]: map-add-image-layer.md
-[Image layer class]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
-
-[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-
-[SimpleDataLayer]: /javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer
-[SimpleDataLayerOptions]: /javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions
-[Drawing tools]: map-add-drawing-toolbar.md
-[Limit Map to Two Finger Panning]: https://samples.azuremaps.com/?sample=limit-map-to-two-finger-panning
-[Limit Scroll Wheel Zoom]: https://samples.azuremaps.com/?sample=limit-scroll-wheel-zoom
-[Create a Fullscreen Control]: https://samples.azuremaps.com/?sample=fullscreen-control
+[Use data-driven style expressions]: data-driven-style-expressions-web-sdk.md
+[Use the Azure Maps map control]: how-to-use-map-control.md
[Using the Azure Maps services module]: how-to-use-services-module.md
-[Search for points of interest]: map-search-location.md
-[Get information from a coordinate (reverse geocode)]: map-get-information-from-coordinate.md
-[Show directions from A to B]: map-route.md
-[Search Autosuggest with JQuery UI]: https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui
+[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Learn more about Azure Maps REST
> [!div class="nextstepaction"] > [Best practices for search](how-to-use-best-practices-for-search.md)
+[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
+[Authentication with Azure Maps]: azure-maps-authentication.md
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[free account]: https://azure.microsoft.com/free/
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
-[Route]: /rest/api/maps/route
-[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
-[Search]: /rest/api/maps/search
-[Calculate routes and directions]: #calculate-routes-and-directions
-[Reverse geocode a coordinate]: #reverse-geocode-a-coordinate
-[Render]: /rest/api/maps/render/getmapimage
-[Time Zone]: /rest/api/maps/timezone
[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
-[Spatial operations]: /rest/api/maps/spatial
-[Traffic]: /rest/api/maps/traffic
-[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
-[best practices for search]: how-to-use-best-practices-for-search.md
-
-[Localization support in Azure Maps]: supported-languages.md
-[Authentication with Azure Maps]: azure-maps-authentication.md
-[supported search categories]: supported-search-categories.md
-
-[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
-[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
-[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
-[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
-
-[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
-[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview-
-[POI search]: /rest/api/maps/search/getsearchpoi
-[POI category search]: /rest/api/maps/search/getsearchpoicategory
-[Nearby search]: /rest/api/maps/search/getsearchnearby
-[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
-[Search along route]: /rest/api/maps/search/postsearchalongroute
-
-[supporting points]: /rest/api/maps/route/postroutedirections#supportingpoints
-[Calculate route]: /rest/api/maps/route/getroutedirections
[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview-
-[calculating routable ranges]: /rest/api/maps/route/getrouterange
[best practices for routing]: how-to-use-best-practices-for-routing.md
+[best practices for search]: how-to-use-best-practices-for-search.md
+[Calculate route]: /rest/api/maps/route/getroutedirections
+[Calculate routes and directions]: #calculate-routes-and-directions
+[calculating routable ranges]: /rest/api/maps/route/getrouterange
+[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
+[documentation]: how-to-use-services-module.md
+[free account]: https://azure.microsoft.com/free/
+[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
+[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
+[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[GitHub project]: https://github.com/perfahlen/AzureMapsRestServices
+[Localization support in Azure Maps]: supported-languages.md
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
[Map image render]: /rest/api/maps/render/getmapimagerytile
-[Render custom data on a raster map]: how-to-render-custom-data.md
-
-[Map tile]: /rest/api/maps/render/getmaptile
[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
-[Upload pins and path data]: how-to-render-custom-data.md#upload-pins-and-path-data
+[Map tile]: /rest/api/maps/render/getmaptile
+[Nearby search]: /rest/api/maps/search/getsearchnearby
+[npm package]: https://www.npmjs.com/package/azure-maps-rest
+[NuGet package]: https://www.nuget.org/packages/AzureMapsRestToolkit
+[POI category search]: /rest/api/maps/search/getsearchpoicategory
+[POI search]: /rest/api/maps/search/getsearchpoi
+[Render custom data on a raster map]: how-to-render-custom-data.md
+[Render]: /rest/api/maps/render/getmapimage
+[Reverse geocode a coordinate]: #reverse-geocode-a-coordinate
+[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
+[Route]: /rest/api/maps/route
+[Search along route]: /rest/api/maps/search/postsearchalongroute
+[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
+[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
+[Search]: /rest/api/maps/search
+[Spatial operations]: /rest/api/maps/spatial
+[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[supported search categories]: supported-search-categories.md
+[supporting points]: /rest/api/maps/route/postroutedirections#supportingpoints
[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid [Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana [Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows [Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion [Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana-
-[documentation]: how-to-use-services-module.md
-[npm package]: https://www.npmjs.com/package/azure-maps-rest
-[GitHub project]: https://github.com/perfahlen/AzureMapsRestServices
-[NuGet package]: https://www.nuget.org/packages/AzureMapsRestToolkit
+[Time Zone]: /rest/api/maps/timezone
+[Traffic]: /rest/api/maps/traffic
+[Upload pins and path data]: how-to-render-custom-data.md#upload-pins-and-path-data
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
Learn the details of how to migrate your Google Maps application with these arti
> [!div class="nextstepaction"] > [Migrate a web app](migrate-from-google-maps-web-app.md)
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[free account]: https://azure.microsoft.com/free/
-[Azure subscription]: https://azure.com
-[Azure portal]: https://portal.azure.com/
-[Manage authentication in Azure Maps]: how-to-manage-authentication.md
[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
-[terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps Blog]: https://aka.ms/AzureMapsBlog
+[Azure Maps developer forums]: https://aka.ms/AzureMapsForums
[Azure Maps pricing page]: https://azure.microsoft.com/pricing/details/azure-maps/
-[Azure pricing calculator]: https://azure.microsoft.com/pricing/calculator/?service=azure-maps
-[Azure Maps term of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
-[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
-
-[Azure Maps product page]: https://azure.com/maps
[Azure Maps product documentation]: https://aka.ms/AzureMapsDocs
-[Azure Maps Web SDK code samples]: https://aka.ms/AzureMapsSamples
-[Azure Maps developer forums]: https://aka.ms/AzureMapsForums
-[Microsoft learning center shows]: https://aka.ms/AzureMapsVideos
-[Azure Maps Blog]: https://aka.ms/AzureMapsBlog
+[Azure Maps product page]: https://azure.com/maps
[Azure Maps Q&A]: https://aka.ms/AzureMapsFeedback-
+[Azure Maps term of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
+[Azure Maps Web SDK code samples]: https://aka.ms/AzureMapsSamples
+[Azure portal]: https://portal.azure.com/
+[Azure pricing calculator]: https://azure.microsoft.com/pricing/calculator/?service=azure-maps
+[Azure subscription]: https://azure.com
[Azure support options]: https://azure.microsoft.com/support/options
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+[free account]: https://azure.microsoft.com/free/
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Microsoft learning center shows]: https://aka.ms/AzureMapsVideos
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
azure-maps Power Bi Visual Add 3D Column Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-3d-column-layer.md
Title: Add a 3D column layer to an Azure Maps Power BI visual
-description: In this article, you will learn how to use the 3D column layer in an Azure Maps Power BI visual.
+description: This article demonstrates how to use the 3D column layer in an Azure Maps Power BI visual.
Last updated 11/29/2021
# Add a 3D column layer
-The **3D column layer** is useful for taking data to the next dimension by allowing visualization of location data as 3D cylinders on the map. Similar to the bubble layer, the 3D column chart can easily visualize two metrics at the same time using color and relative height. In order for the columns to have height, a measure needs to be added to the **Size** bucket of the **Fields** pane. If a measure is not provided, columns with no height show as flat squares or circles depending on the **Shape** option.
+The **3D column layer** is useful for taking data to the next dimension by allowing visualization of location data as 3D cylinders on the map. Similar to the bubble layer, the 3D column chart can easily visualize two metrics at the same time using color and relative height. In order for the columns to have height, a measure needs to be added to the **Size** bucket of the **Fields** pane. If a measure isn't provided, columns with no height show as flat squares or circles depending on the **Shape** option.
:::image type="content" source="./media/power-bi-visual/3d-column-layer-styled.png" alt-text="A map displaying point data using the 3D column layer"::: Users can tilt and rotate the map to view your data from different perspectives. The map can be tilted or pitched using one of the following methods. -- Turn on the **Navigation controls** option in the **Map settings** of the **Format** pane. This will add a button to tilt the map.-- Press the right mouse button down and drag the mouse up or down.
+- Turn on the **Navigation controls** option in the **Map settings** of the **Format** pane to add a button that tilts the map.
+- Hold down the right mouse button and drag the mouse up or down.
- Using a touch screen, touch the map with two fingers and drag them up or down together. - With the map focused, hold the **Shift** key, and press the **Up** or **Down arrow** keys. The map can be rotated using one of the following methods. -- Turn on the **Navigation controls** option in the **Map settings** of the **Format** pane. This will add a button to rotate the map.-- Press the right mouse button down and drag the mouse left or right.
+- Turn on the **Navigation controls** option in the **Map settings** of the **Format** pane to add a button that rotates the map.
+- Hold down the right mouse button and drag the mouse left or right.
- Using a touch screen, touch the map with two fingers and rotate. - With the map focused, hold the **Shift** key, and press the **Left** or **Right arrow** keys.
The following are all settings in the **Format** pane that are available in the
| Setting | Description | |-||
-| Column shape | The shape of the 3D column.<br/><br/>&nbsp;&nbsp;&nbsp;&nbsp;• Box – columns rendered as rectangular boxes.<br/>&nbsp;&nbsp;&nbsp;&nbsp;• Cylinder – columns rendered as cylinders. |
-| Height | The height of each column. If a field is passed into the **Size** bucket of the **Fields** pane, columns will be scaled relative to this height value. |
+| Column shape | The shape of the 3D column.<br><br>&nbsp;&nbsp;&nbsp;&nbsp;• Box – columns rendered as rectangular boxes.<br>&nbsp;&nbsp;&nbsp;&nbsp;• Cylinder – columns rendered as cylinders. |
+| Height | The height of each column. If a field is passed into the **Size** bucket of the **Fields** pane, columns are scaled relative to this height value. |
| Scale height on zoom | Specifies if the height of the columns should scale relative to the zoom level. | | Width | The width of each column. | | Scale width on zoom | Specifies if the width of the columns should scale relative to the zoom level. |
-| Fill color | Color of each column. This option is hidden when a field is passed into the **Legend** bucket of the **Fields** pane and a separate **Data colors** section will appear in the **Format** pane. |
+| Fill color | Color of each column. This option is hidden when a field is passed into the **Legend** bucket of the **Fields** pane and a separate **Data colors** section appears in the **Format** pane. |
| Transparency | Transparency of each column. | | Min zoom | Minimum zoom level tiles are available. | | Max zoom | Maximum zoom level tiles are available. |
azure-resource-manager Template Functions Cidr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-cidr.md
Title: Template functions - CIDR
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to manipulate IP addresses and create IP address ranges. Previously updated : 05/16/2023 Last updated : 07/14/2023 # CIDR functions for ARM templates
Last updated 05/16/2023
This article describes the functions for working with CIDR in your Azure Resource Manager template (ARM template). > [!TIP]
-> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [date](../bicep/bicep-functions-date.md) functions.
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [cidr](../bicep/bicep-functions-cidr.md) functions.
## parseCidr
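
As a rough illustration of what `parseCidr` computes, here's a Python analogue built on the standard `ipaddress` module. The keys mirror the output properties documented for the ARM function (network, netmask, broadcast, firstUsable, lastUsable, cidr); treat this as a sketch for IPv4 prefixes shorter than /31, not a drop-in replacement for the template function.

```python
import ipaddress

def parse_cidr(cidr: str) -> dict:
    """Approximate the ARM template parseCidr() output for an IPv4 prefix."""
    net = ipaddress.ip_network(cidr)
    return {
        "network": str(net.network_address),
        "netmask": str(net.netmask),
        "broadcast": str(net.broadcast_address),
        "firstUsable": str(net.network_address + 1),
        "lastUsable": str(net.broadcast_address - 1),
        "cidr": net.prefixlen,
    }

print(parse_cidr("10.144.0.0/20"))
# {'network': '10.144.0.0', 'netmask': '255.255.240.0',
#  'broadcast': '10.144.15.255', 'firstUsable': '10.144.0.1',
#  'lastUsable': '10.144.15.254', 'cidr': 20}
```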
azure-signalr Howto Shared Private Endpoints Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints-key-vault.md
The examples in this article use the following naming convention, although you c
:::image type="content" alt-text="Screenshot of the button for adding a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" ::: Enter the following information:
+
| Field | Description | | -- | -- | | **Name** | The name of the shared private endpoint. |
The examples in this article use the following naming convention, although you c
| **Subscription** | The subscription containing your Key Vault. | | **Resource** | Enter the name of your Key Vault resource. | | **Request Message** | Enter "please approve" |-
+
1. Select **Add**. :::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-add.png" :::
backup Back Up Azure Stack Hyperconverged Infrastructure Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md
description: This article contains the procedures to back up and recover virtual
Last updated 05/15/2022 --++ # Back up Azure Stack HCI virtual machines with Azure Backup Server
backup Back Up File Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/back-up-file-data.md
Title: Back up file data with MABS
description: You can back up file data on server and client computers with MABS. Last updated 08/19/2021--++ # Back up file data with MABS
backup Back Up Hyper V Virtual Machines Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/back-up-hyper-v-virtual-machines-mabs.md
Title: Back up Hyper-V virtual machines with MABS
description: This article contains the procedures for backing up and recovery of virtual machines using Microsoft Azure Backup Server (MABS). Last updated 03/01/2023-- ++ # Back up Hyper-V virtual machines with Azure Backup Server
backup Backup Afs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-afs-cli.md
description: Learn how to use Azure CLI to back up Azure file shares in the Reco
Last updated 01/14/2020--++ # Back up Azure file shares with Azure CLI
backup Backup Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-architecture.md
description: Provides an overview of the architecture, components, and processes
Last updated 12/24/2021 --++ # Azure Backup architecture and components
backup Backup Azure About Mars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-about-mars.md
Last updated 11/28/2022 --++ # About the Microsoft Azure Recovery Services (MARS) agent for Azure Backup
backup Backup Azure Afs Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-afs-automation.md
Last updated 02/11/2022 --++ # Back up an Azure file share by using PowerShell
backup Backup Azure Alternate Dpm Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-alternate-dpm-server.md
Title: Recover data from an Azure Backup Server
description: Recover the data you've protected to a Recovery Services vault from any Azure Backup Server registered to that vault. Last updated 01/24/2023-- ++ # Recover data from Azure Backup Server
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Last updated 07/13/2023 --++ # How to restore Azure VM data in Azure portal
backup Backup Azure Enhanced Soft Delete About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md
Title: Overview of enhanced soft delete for Azure Backup (preview)
description: This article gives an overview of enhanced soft delete for Azure Backup. Previously updated : 06/29/2023 Last updated : 07/14/2023
The key benefits of enhanced soft delete are:
>The soft delete doesn't cost you for 14 days of retention; however, you're charged for the period beyond 14 days. [Learn more](#pricing). - **Re-registration of soft deleted items**: You can now register the items in soft deleted state with another vault. However, you can't register the same item with two vaults for active backups. - **Soft delete and reregistration of backup containers**: You can now unregister the backup containers (which you can soft delete) if you've deleted all backup items in the container. You can now register such soft deleted containers to other vaults. This is applicable for applicable workloads only, including SQL in Azure VM backup, SAP HANA in Azure VM backup and backup of on-premises servers.-- **Soft delete across workloads**: Enhanced soft delete applies to all vaulted workloads alike and is supported for Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, Disk and VM snapshot backups.
+- **Soft delete across workloads**: Enhanced soft delete applies to all vaulted data sources alike and is supported for Recovery Services vaults and Backup vaults. Enhanced soft delete also applies to operational backups of disks and VM backup snapshots used for instant restores. However, unlike vaulted backups, these snapshots can be directly accessed and deleted before the soft delete period expires. Enhanced soft delete is currently not supported for operational backup for Blobs and Azure Files.
- **Soft delete of recovery points**: This feature allows you to recover data from recovery points that might have been deleted due to making changes in a backup policy or changing the backup policy associated with a backup item. Soft delete of recovery points isn't supported for log recovery points in SQL and SAP HANA workloads. [Learn more](manage-recovery-points.md#impact-of-expired-recovery-points-for-items-in-soft-deleted-state). ## Supported regions
This feature helps to retain these recovery points for an additional duration, a
## Pricing
-There is no retention cost for the default duration of *14* days, after which, it incurs regular backup charges. For soft delete retention *>14* days, the default period applies to the *last 14 days* of the continuous retention configured in soft delete, and then backups are permanently deleted.
+There is no retention cost for the default soft delete duration of *14* days for vaulted backup; after that, regular backup charges apply. For soft delete retention *>14* days, the default period applies to the *last 14 days* of the continuous retention configured in soft delete, and then backups are permanently deleted.
For example, you've deleted backups for one of the instances in the vault that has soft delete retention of *60* days. If you want to recover the soft deleted data after *52* days of deletion, the pricing is:
For example, you've deleted backups for one of the instances in the vault that h
- No charges for the last *6* days of soft delete retention.
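
To make the arithmetic above concrete, here's a minimal Python sketch of this billing rule. It only illustrates the example; it isn't an official pricing calculator, and it assumes the 14-day free window falls at the end of the configured retention:

```python
def soft_delete_charged_days(retention_days: int, days_in_soft_delete: int,
                             free_window_days: int = 14) -> int:
    """Days billed at regular backup rates for a soft-deleted vaulted item.

    The free window is the *last* `free_window_days` days of the configured
    soft delete retention; days before that window are billed.
    """
    billable_window = retention_days - free_window_days
    return min(days_in_soft_delete, billable_window)

# The example above: 60-day retention, data recovered after 52 days.
charged = soft_delete_charged_days(60, 52)  # 46 days billed
free = 52 - charged                         # 6 days free
print(f"Charged for {charged} days, free for {free} days")
```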
+However, this billing rule doesn't apply to soft-deleted operational backups of disks and VM backup snapshots; for those, billing continues based on the cost of the resource.
+ ## Soft delete with multi-user authorization You can also use multi-user authorization (MUA) to add an additional layer of protection against disabling soft delete. [Learn more](multi-user-authorization-concept.md).
backup Quick Backup Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
To apply a backup policy to your Azure VMs, follow these steps:
-1. Go to **Backup center** and click **+Backup** from the **Overview** tab.
+1. Go to **Backup center** and select **+Backup** from the **Overview** tab.
![Screenshot showing the Backup button.](./media/backup-azure-arm-vms-prepare/backup-button.png)
-1. Select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then click **Continue**.
+1. Select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then select **Continue**.
![Screenshot showing Backup and Backup Goal panes.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
The initial backup will run in accordance with the schedule, but you can run it
1. Go to **Backup center** and select the **Backup Instances** menu item. 1. Select **Azure Virtual machines** as the **Datasource type**. Then search for the VM that you have configured for backup.
-1. Right-click the relevant row or select the more icon (…), and then click **Backup Now**.
+1. Right-click the relevant row or select the more icon (…), and then select **Backup Now**.
1. In **Backup Now**, use the calendar control to select the last day that the recovery point should be retained. Then select **OK**. 1. Monitor the portal notifications. To monitor the job progress, go to **Backup center** > **Backup Jobs** and filter the list for **In progress** jobs.
backup Sap Hana Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md
Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| **Restore types** | Refer to the SAP HANA Note [1642148](https://launchpad.support.sap.com/#/notes/1642148) to learn about the supported restore types | | | **Backup limits** | Up to 8 TB of full backup size per SAP HANA instance (soft limit) | | | **Number of full backups per day** | One scheduled backup. <br><br> Three on-demand backups. <br><br> We recommend not to trigger more than three backups per day. However, to allow user retries in case of failed attempts, hard limit for on-demand backups is set to nine attempts. |
+| **HANA deployments** | HANA System Replication (HSR) | |
| **Special configurations** | | SAP HANA + Dynamic Tiering <br> Cloning through LaMa |
backup Sap Hana Database Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-restore.md
Azure Backup now supports backup and restore of SAP HANA System Replication (HSR
>[!Note] >- The restore process for HANA databases with HSR is the same as the restore process for HANA databases without HSR. As per SAP advisories, you can restore databases with HSR mode as *standalone* databases. If the target system has the HSR mode enabled, first disable the mode, and then restore the database.
->- Original Location Recovery (OLR) is currently not supported for HSR. Select **Alternate location** restore, and then select the source VM as your *Host* from the list.
+>- Original Location Recovery (OLR) is currently not supported for HSR. Alternatively, select **Alternate location** restore, and then select the source VM as your *Host* from the list.
>- Restore to HSR instance isn't supported. However, restore only to HANA instance is supported. For information about the supported configurations and scenarios, see the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
backup Sap Hana Database With Hana System Replication Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md
SAP HANA databases are critical workloads that require a low recovery-point objective (RPO) and long-term retention. This article describes how you can back up SAP HANA databases that are running on Azure virtual machines (VMs) to an Azure Backup Recovery Services vault by using [Azure Backup](backup-overview.md).
-You can also switch the protection of SAP HANA database on Azure VM (standalone) on Azure Backup to HSR. [Learn more](#possible-scenarios-to-protect-hsr-nodes-on-azure-backup).
+You can also switch the protection of SAP HANA database on Azure VM (standalone) on Azure Backup to HSR. [Learn more](#scenarios-to-protect-hsr-nodes-on-azure-backup).
>[!Note] >- The **HSR + DR** scenario is currently not supported because the VM and the vault must be in the same region.
When a failover occurs, the users are replicated to the new primary, but *hdbuse
1. Pass the custom backup user key to the script as a parameter:
- `-bk CUSTOM_BACKUP_KEY_NAME` or `-backup-key CUSTOM_BACKUP_KEY_NAME`
+ ```HDBSQL
+ -bk CUSTOM_BACKUP_KEY_NAME
+ -backup-key CUSTOM_BACKUP_KEY_NAME
+ ```
If the password of this custom backup key expires, the backup and restore operations will fail.
When a failover occurs, the users are replicated to the new primary, but *hdbuse
hdbuserstore set SYSTEMKEY localhost:30013@SYSTEMDB <custom-user> '<some-password>' hdbuserstore set SYSTEMKEY <load balancer host/ip>:30013@SYSTEMDB <custom-user> '<some-password>' ```-
- :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/pass-custom-backup-user-key-to-script-as-parameter-architecture.png" alt-text="Disgram explains the flow to pass the custom backup user key to the script as a parameter." lightbox="./media/sap-hana-database-with-hana-system-replication-backup/pass-custom-backup-user-key-to-script-as-parameter-architecture.png":::
-
+
>[!Note] >You can create a custom backup key using the load balancer host/IP instead of local host to use Virtual IP (VIP).
+ >
+ >**Diagram shows the creation of the custom backup key using local host/IP.**
+ >
+ > :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/pass-custom-backup-user-key-to-script-as-parameter-architecture.png" alt-text="Diagram that explains the flow to pass the custom backup user key to the script as a parameter." lightbox="./media/sap-hana-database-with-hana-system-replication-backup/pass-custom-backup-user-key-to-script-as-parameter-architecture.png":::
+ >
+ >**Diagram shows the creation of the custom backup key using Virtual IP (Load Balancer Frontend IP/Host).**
+ >
+ > :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/create-custom-backup-key-using-virtual-ip.png" alt-text="Diagram that explains the flow to create the custom backup key using Virtual IP." lightbox="./media/sap-hana-database-with-hana-system-replication-backup/create-custom-backup-key-using-virtual-ip.png":::
1. Create the same *Custom backup user* (with the same password) and key (in *hdbuserstore*) on both VMs/nodes.
Backups run in accordance with the policy schedule. Learn how to [run an on-dema
You can run an on-demand backup using SAP HANA native clients to the local file system instead of Backint. Learn how to [manage operations using SAP native clients](sap-hana-database-manage.md#manage-operations-using-sap-hana-native-clients).
-## Possible scenarios to protect HSR nodes on Azure Backup
+## Scenarios to protect HSR nodes on Azure Backup
You can now switch the protection of SAP HANA database on Azure VM (standalone) on Azure Backup to HSR. If you've already configured HSR and are protecting only the primary node using Azure Backup, you can modify the configuration to protect both primary and secondary nodes.
You can now switch the protection of SAP HANA database on Azure VM (standalone)
1. (Mandatory) [Run the latest preregistration script on both primary and secondary VM nodes](#run-the-preregistration-script). >[!Note]
- >HSR-based attributes are added to the latest preregistration script -
+ >HSR-based attributes are added to the latest preregistration script.
1. Configure HSR manually or using any clustering tools, such as **pacemaker**,
You can now switch the protection of SAP HANA database on Azure VM (standalone)
1. (Mandatory) [Run the latest preregistration script on both primary and secondary VM nodes](#run-the-preregistration-script). >[!Note]
- >HSR-based attributes are added to the latest preregistration script - //link here )
+ >HSR-based attributes are added to the latest preregistration script.
1. Configure HSR manually or using any clustering tools like pacemaker.
communication-services Email Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights/email-insights.md
Title: Azure Communication Services Email Insights Dashboard description: Descriptions of data visualizations available for Email Communications Services via Workbooks-+ - Previously updated : 03/08/2021+ Last updated : 07/10/2023
Inside your Azure Communication Services resource, scroll down on the left nav b
## Email Insights
-The **Email** tab displays delivery status, email size, and email count:
+The Email Insights dashboards give users an intuitive and clear way to navigate their Email usage data. The dashboard is broken into two subsections overview and email performance.
+
+Filters: Filters help you focus on the data that is most relevant to your needs by narrowing down your report to specific criteria, such as a specific date range, recipient info, or location.
+
+
+### Overview:
+This section provides insights into the effectiveness of email notifications and message performance, helping you identify patterns that impact message delivery. The graphs measure the total messages sent, delivered, failed, blocked, viewed, and clicked, and the size of the messages sent and delivered.
+
+#### Overall email health
+
+Overall email health measures email delivery, assessing the effectiveness and efficiency of your email campaigns. This metric enables you to optimize your marketing efforts and improve customer engagement and conversion rates.
+++
+#### Email size
+Email size represents the storage space your emails take up. This information helps you optimize your mailbox and prevent it from reaching the maximum size, which can affect performance.
++
+### Email performance:
+ This section provides insights into the email delivery rate, delivery log indicating information about delivered emailed, bounced, blocked, suppression, failed etc. Analyzing the delivery log will help identify any issues or patterns and enable you to troubleshoot delivery problems and improved customer engagement.
+  
+
+#### Email delivery rates
+Email delivery rates are pivotal to the success of an email marketing campaign; they provide insights into email performance over time. Measure and monitor message delivery performance over extended periods: by week, by month, or over a specified period.
++
+#### Failure rate
+
+Email failure rate represents the different types of unsuccessful email deliveries, including failed, bounced, blocked, and suppressed. It helps measure the following rates (a small worked sketch follows the list):
+
+- **Failed rate**: Represents the number of emails that didn't get delivered to the recipient for various reasons.
+- **Suppression rate**: Represents the number of recipients who agreed to receive email messages from you, but who no longer have a valid mailbox at that address or who opted out of receiving your emails.
+- **Bounce rate**: Represents the number of emails that couldn't be delivered because the recipient's email address is invalid, the email server blocked the delivery, or the domain name is invalid.
+- **Blocked rate**: Represents the number of emails that were not delivered through the recipient's email server or were blocked by a spam filter.
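+
+A minimal Python sketch of how these four rates relate to total send attempts; the counts are hypothetical and only illustrate the arithmetic behind the dashboard:
+
+```python
+# Hypothetical message counts for one campaign (illustrative only).
+counts = {
+    "delivered": 9200,
+    "failed": 300,
+    "suppressed": 150,
+    "bounced": 250,
+    "blocked": 100,
+}
+total = sum(counts.values())
+
+# Each failure-type rate is its share of all send attempts.
+for outcome in ("failed", "suppressed", "bounced", "blocked"):
+    print(f"{outcome} rate: {counts[outcome] / total:.1%}")
+```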
+ ## More information about workbooks
For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks]
The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar: Editing these dashboards doesn't modify the **Insights** tab, but rather creates a separate workbook that can be accessed on your resourceΓÇÖs Workbooks tab: For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Email Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/email-logs.md
Title: Azure Communication Services email logs description: Learn about logging for Azure Communication Services email.-+ - Previously updated : 03/21/2023+ Last updated : 06/01/2023
Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+## Prerequisites
+
+Azure Communication Services provides monitoring and analytics features via [Azure Monitor Logs overview](../../../../azure-monitor/logs/data-platform-logs.md) and [Azure Monitor Metrics](../../../../azure-monitor/essentials/data-platform-metrics.md). Each Azure resource requires its own diagnostic setting, which defines the following criteria:
+ * Categories of logs and metric data sent to the destinations defined in the setting. The available categories will vary for different resource types.
+ * One or more destinations to send the logs. Current destinations include Log Analytics workspace, Event Hubs, and Azure Storage.
+ * A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), then create multiple settings. Each resource can have up to five diagnostic settings.
+ > [!IMPORTANT]
-> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+> You must enable a Diagnostic Setting in Azure Monitor to send the log data of your email resource to a Log Analytics workspace, Event Hubs, or an Azure storage account in order to receive and analyze your email data. If you don't send email log data to one of these options, it won't be stored and will be lost.
+The following are instructions for configuring your Azure Monitor resource to start creating logs and metrics for your Communications Services. For detailed documentation about using Diagnostic Settings across all Azure resources, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+> [!NOTE]
+> Under the diagnostic setting name, select "Email Service Delivery Status Update Logs", "Email Service Send Mail Logs", and "Email Service User Engagement Logs" to enable the logs for email.
+>
+> :::image type="content" source="..\logs\email-diagnostic-log.png" alt-text="Screenshot of diagnostic settings for Email.":::
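+
+As a sketch of automating the same setup, the following uses the `azure-mgmt-monitor` Python package to enable the email log categories on a Communication Services resource. The resource ID, workspace ID, and setting name are placeholders, and the category names are assumptions based on the log categories above; adapt them to your environment:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.monitor import MonitorManagementClient
+
+subscription_id = "<subscription-id>"  # placeholder
+acs_resource_id = (  # placeholder ID of your Communication Services resource
+    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+    "/providers/Microsoft.Communication/communicationServices/<acs-name>"
+)
+
+client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)
+
+# Enable the three email log categories, sending them to a Log Analytics workspace.
+client.diagnostic_settings.create_or_update(
+    resource_uri=acs_resource_id,
+    name="email-logs",
+    parameters={
+        "workspace_id": "<log-analytics-workspace-resource-id>",  # placeholder
+        "logs": [
+            {"category": "EmailSendMailOperational", "enabled": True},
+            {"category": "EmailStatusUpdateOperational", "enabled": True},
+            {"category": "EmailUserEngagementOperational", "enabled": True},
+        ],
+    },
+)
+```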
## Resource log categories
Communication Services offers the following types of logs that you can enable:
| `Timestamp` | The timestamp (UTC) of when the log was generated. | | `Operation Name` | The operation associated with log record. | | `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. | | `Properties` | Other data applicable to various modes of Communication Services. | | `Record ID` | The unique ID for a given usage record. |
Communication Services offers the following types of logs that you can enable:
## Email Send Mail operational logs
+*Email Send Mail Operational logs* provide valuable insights into API request trends over time. This data helps you discover key email analytics, such as the total number of emails sent, email size, and number of emails with attachments. This information can be quickly analyzed in near-real-time and visualized in a user-friendly way to help drive better decision-making.
+ | Property | Description | | -- | | | `TimeGenerated` | The timestamp (UTC) of when the log was generated. | | `Location` | The region where the operation was processed. |
-| `OperationName` | The operation associated with log record. |
+| `OperationName` | The operation associated with the log record. |
| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
-| `Size` | Represents the total size in megabytes of the email body, subject, headers and attachments. |
+| `Size` | Represents the total size of the email body, subject, headers and attachments in megabytes. |
| `ToRecipientsCount` | The total # of unique email addresses on the To line. | | `CcRecipientsCount` | The total # of unique email addresses on the Cc line. | | `BccRecipientsCount` | The total # of unique email addresses on the Bcc line. |
-| `UniqueRecipientsCount` | This is the deduplicated total recipient count for the To, Cc and Bcc address fields. |
+| `UniqueRecipientsCount` | This is the deduplicated total recipient count for the To, Cc, and Bcc address fields. |
| `AttachmentsCount` | The total # of attachments. |
+**Samples**
+
+``` json
+{
+ "OperationType":"SendMail",
+ "OperationCategory":"EmailSendMailOperational",
+ "Size":0.026019,
+ "ToRecipientsCount":2,
+ "CcRecipientsCount":3,
+ "BccRecipientsCount":1,
+ "UniqueRecipientsCount":6,
+ "AttachmentsCount":0
+}
+```
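+
+Once these logs land in a Log Analytics workspace, they can be queried programmatically. A minimal sketch using the `azure-monitor-query` Python package; the workspace ID is a placeholder, and `ACSEmailSendMailOperational` is assumed to be the Log Analytics table backing this category:
+
+```python
+from datetime import timedelta
+
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import LogsQueryClient, LogsQueryStatus
+
+client = LogsQueryClient(DefaultAzureCredential())
+
+# Daily send volume and average message size over the past week.
+query = """
+ACSEmailSendMailOperational
+| summarize TotalSent = count(), AvgSizeMb = avg(Size) by bin(TimeGenerated, 1d)
+| order by TimeGenerated asc
+"""
+
+response = client.query_workspace(
+    workspace_id="<log-analytics-workspace-id>",  # placeholder
+    query=query,
+    timespan=timedelta(days=7),
+)
+
+if response.status == LogsQueryStatus.SUCCESS:
+    for table in response.tables:
+        for row in table.rows:
+            print(row)
+```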
+ ## Email Status Update operational logs
+*Email status update operational logs* provide in-depth insights into message and recipient level delivery status updates on your sendmail API requests. These logs offer message-specific details, such as the time of delivery, as well as recipient-level details, such as email addresses and delivery status updates. By tracking these logs, you can ensure full visibility into your email delivery process, quickly identifying any issues that may arise and taking corrective action as necessary.
+ | Property | Description | | -- | | | `TimeGenerated` | The timestamp (UTC) of when the log was generated. | | `Location` | The region where the operation was processed. |
-| `OperationName` | The operation associated with log record. |
+| `OperationName` | The operation associated with the log record. |
| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. | | `RecipientId` | The email address for the targeted recipient. If this is a message-level event, the property will be empty. | | `DeliveryStatus` | The terminal status of the message. |
+| `SmtpStatusCode` | SMTP status code returned from the recipient email server in response to a send mail request. |
+| `EnhancedSmtpStatusCode` | Enhanced SMTP status code returned from the recipient email server. |
+| `SenderDomain` | The domain portion of the SenderAddress used in sending emails. |
+| `SenderUsername` | The username portion of the SenderAddress used in sending emails. |
+| `IsHardBounce` | Signifies whether a delivery failure was due to a permanent or a temporary issue. `IsHardBounce == true` means a permanent mailbox issue preventing emails from being delivered. |
+
+**Samples**
+
+``` json
+{
+ "OperationType":"DeliveryStatusUpdate",
+ "OperationCategory":"EmailStatusUpdateOperational",
+ "RecipientId":"user@email.com",
+ "DeliveryStatus":"Delivered",
+ "SenderDomain":"contoso.com",
+ "SenderUsername":"donotreply",
+ "IsHardBounce":false
+}
+```
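+
+Following the same pattern, here's a sketch that surfaces hard-bounce rates per sender domain from these logs; the table name `ACSEmailStatusUpdateOperational` is an assumption to verify against your workspace:
+
+```python
+from datetime import timedelta
+
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import LogsQueryClient
+
+client = LogsQueryClient(DefaultAzureCredential())
+
+# Hard-bounce share per sender domain over the last 30 days.
+bounce_query = """
+ACSEmailStatusUpdateOperational
+| where isnotempty(RecipientId)
+| summarize Total = count(), HardBounces = countif(IsHardBounce == true) by SenderDomain
+| extend HardBounceRate = todouble(HardBounces) / Total
+"""
+
+result = client.query_workspace(
+    "<log-analytics-workspace-id>", bounce_query, timespan=timedelta(days=30)
+)
+```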
## Email User Engagement operational logs
+*Email user engagement operational logs* provide insights into email engagement trends for your email system. This data helps you track and analyze key email metrics such as open rates, click-through rates, and unsubscribe rates. These logs can be stored and analyzed, allowing you to gain deeper insights into your email system's performance and adapt your strategy accordingly. Overall, Email User Engagement operational logs provide a powerful tool for improving your email system's performance, proactively measuring and optimizing your email campaigns, and improving user engagement over time.
+ | Property | Description | | -- | | | `TimeGenerated` | The timestamp (UTC) of when the log was generated. | | `Location` | The region where the operation was processed. |
-| `OperationName` | The operation associated with log record. |
+| `OperationName` | The operation associated with the log record. |
| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. | | `RecipientId` | The email address for the targeted recipient. If this is a message-level event, the property will be empty. | | `EngagementType` | The type of user engagement being tracked. | | `EngagementContext` | The context represents what the user interacted with. | | `UserAgent` | The user agent string from the client. |+
+**Samples**
+
+``` json
+{
+ "OperationType": "UserEngagementUpdate",
+ "OperationCategory": "EmailUserEngagementOperational",
+ "EngagementType": "View",
+ "UserAgent": "Mozilla/5.0"
+}
+
+{
+ "OperationType":"UserEngagementUpdate",
+ "OperationCategory":"EmailUserEngagementOperational",
+ "EngagementType":"Click",
+ "EngagementContext":"https://www.contoso.com/support?id=12345",
+ "UserAgent":"Mozilla/5.0"
+}
+```
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/authentication.md
User access tokens are generated using the Identity SDK and are associated with
## Using identity for monitoring and metrics
-The user identity is intended to act as a primary key for logs and metrics collected through Azure Monitor. If you'd like to get a view of all of a specific user's calls, for example, you should set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a singular user. Learn more about [log analytics](../concepts/analytics/query-call-logs.md), and [metrics](../concepts/metrics.md) available to you.
+The user identity is intended to act as a primary key for logs and metrics collected through Azure Monitor. If you'd like to get a view of all of a specific user's calls, for example, you should set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a singular user. Learn more about [log analytics](../concepts/analytics/query-call-logs.md), and [metrics](../concepts/authentication.md) available to you.
## Next steps
communication-services Call Automation Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation-teams-interop.md
description: Learn about Teams interoperability with Azure Communication Service
+ Last updated 02/22/2023
[!INCLUDE [Private Preview Notice](../../includes/private-preview-include.md)]
-Businesses are looking for innovative ways to increase the efficiency of their customer service operations. Azure Communication Services Call Automation provides developers the ability to build programmable customer interactions using real-time event triggers to perform actions based on custom business logic. For example, with support for interoperability with Microsoft Teams, developers can use Call Automation APIs to add subject matter experts (SMEs). These SMEs, who use Microsoft Teams, can be added to an existing customer service call to provide to resolve a customer issue.
+Businesses are looking for innovative ways to increase the efficiency of their customer service operations. Azure Communication Services Call Automation provides developers the ability to build programmable customer interactions using real-time event triggers to perform actions based on custom business logic. For example, with support for interoperability with Microsoft Teams, developers can use Call Automation APIs to add subject matter experts (SMEs). These SMEs, who use Microsoft Teams, can be added to an existing customer service call to provide expert advice and help resolve a customer issue.
This interoperability with Microsoft Teams over VoIP makes it easy for developers to implement per-region multi-tenant trunks that maximize value and reduce telephony infrastructure overhead. Each new tenant will be able to use this setup in a few minutes after the Microsoft Teams admin has granted the necessary permissions to the Azure Communication Services resource.
The dataflow diagram depicts a canonical scenario where a Teams user is added to
[ ![Diagram of calling flow for a customer service with Microsoft Teams and Call Automation.](./media/call-automation-teams-interop.png)](./media/call-automation-teams-interop.png#lightbox)

1. Customer is on an ongoing call with a Contact Center customer service agent.
-1. the call, the customer service agent needs expert help from one of the domain experts part of an engineering team. The agent is able to identify a knowledge worker who is available on Teams (presence via Graph APIs) and tries to add them to the call.
+1. During the call, the customer service agent needs expert help from a domain expert who is part of an engineering team. The agent is able to identify a knowledge worker who is available on Teams (presence via Graph APIs) and tries to add them to the call.
1. Contoso Contact Center's SBC is already configured with ACS Direct Routing, where this add participant request is processed.
1. Contoso Contact Center provider has implemented a web service, using ACS Call Automation, that receives the "add Participant" request.
1. With Teams interop built into ACS Call Automation, ACS then uses the Teams user's ObjectId to add them to the call (a code sketch follows this list). The Teams user receives the incoming call notification. They accept and join the call.
-1. Once the Teams has provided their expertise, they leave the call. The customer service agent and customer continue wrap up their conversation.
+1. Once the Teams user has provided their expertise, they leave the call. The customer service agent and customer continue to wrap up their conversation.
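To make step 5 concrete, here's a minimal sketch of the add-participant call, assuming the `Azure.Communication.CallAutomation` .NET SDK; the connection string, call connection ID, and Teams user object ID are placeholders:

```csharp
using Azure.Communication;
using Azure.Communication.CallAutomation;

// Placeholder connection string for the allow-listed ACS resource.
var client = new CallAutomationClient("<acs-connection-string>");

// The call connection ID identifies the ongoing customer service call.
CallConnection callConnection = client.GetCallConnection("<call-connection-id>");

// Invite the Teams user by their object ID; they receive an incoming
// call notification and can accept to join the ongoing call.
var callInvite = new CallInvite(new MicrosoftTeamsUserIdentifier("<teams-user-object-id>"));
await callConnection.AddParticipantAsync(callInvite);
```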
## Capabilities
The following list presents the set of features that are currently available in
## Next steps

> [!div class="nextstepaction"]
-> [Get started with Adding a Microsoft Teams user to an ongoing call using Call Automation](./../../how-tos/call-automation/teams-interop-call-automation.md)
+> [Get started with adding a Microsoft Teams user to an ongoing call using Call Automation](./../../how-tos/call-automation/teams-interop-call-automation.md)
Here are some articles of interest to you:

- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
communication-services Email Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email-metrics.md
+
+ Title: Email metric definitions for Azure Communication Services
+
+description: This document covers definitions of Azure Communication Services email metrics available in the Azure portal.
+++++ Last updated : 06/30/2023++++
+# Email metrics overview
+
+Azure Communication Services currently provides metrics for all Azure Communication Services primitives. [Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that email requests emit.
+
+## Where to find metrics
+
+Primitives in Azure Communication Services emit metrics for API requests. These metrics can be found in the Metrics tab under your Communication Services resource. You can also create permanent dashboards using the workbooks tab under your Communication Services resource.
+
+## Metric definitions
+
+All API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`.
+
+More information on supported aggregation types and time series aggregations can be found in [Advanced features of Azure Metrics Explorer](../../azure-monitor/essentials/metrics-charts.md#aggregation).
+
+- **Operation** - All operations or routes that can be called on the Azure Communication Services email gateway.
+- **Status Code** - The status code response sent after the request.
+- **StatusSubClass** - The status code series sent after the response.
+
+### Email Service Delivery Status Updates
+The `Email Service Delivery Status Updates` metric lets the email sender track SMTP and Enhanced SMTP status codes and get an idea of how many hard bounces they are encountering.
+
+The following dimensions are available on the `Email Service Delivery Status Updates` metric:
+
+| Dimension | Description |
+| -- | - |
+| Result | High-level status of the message delivery: Success, Failure. |
+| MessageStatus | Terminal state of the message delivery: Delivered, Failed, Suppressed. Emails are suppressed when a user sends an email to an email address that is known not to exist. Sending emails to addresses that do not exist triggers a hard bounce. |
+| IsHardBounce | True when a message delivery failed due to a hard bounce or if an item was suppressed due to a previous hard bounce. |
+| SenderDomain | The domain portion of the senders email address. |
+| SmtpStatusCode | SMTP error code for failed deliveries. |
+| EnhancedSmtpStatusCode | The EnhancedSmtpStatusCode status code is emitted if it's available. This status code provides additional details not available with the SmtpStatusCode. |
++
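As a hedged sketch of consuming this metric programmatically with the `Azure.Monitor.Query` .NET SDK, the example below counts hard bounces per hour. The metric name `EmailDeliveryStatusUpdates` and the resource ID are assumptions; confirm the exact metric name in the Metrics blade of your resource:

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

var metricsClient = new MetricsQueryClient(new DefaultAzureCredential());

// Placeholder resource ID; the metric name is an assumption here.
Response<MetricsQueryResult> result = await metricsClient.QueryResourceAsync(
    "<communication-services-resource-id>",
    new[] { "EmailDeliveryStatusUpdates" },
    new MetricsQueryOptions
    {
        // Filter on the IsHardBounce dimension described above.
        Filter = "IsHardBounce eq 'True'",
        Granularity = TimeSpan.FromHours(1),
        Aggregations = { MetricAggregationType.Count }
    });

// Walk the time series and print the hourly hard-bounce counts.
foreach (MetricResult metric in result.Value.Metrics)
{
    foreach (MetricTimeSeriesElement series in metric.TimeSeries)
    {
        foreach (MetricValue point in series.Values)
        {
            Console.WriteLine($"{point.TimeStamp}: {point.Count} hard bounces");
        }
    }
}
```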
+### Email Service API requests
+
+The following operations are available for the `Email Service API Requests` metric. These standard dimensions are supported: StatusCode, StatusCodeClass, StatusCodeReason, and Operation.
+
+| Operation | Description |
+| -- | - |
+| SendMail | Email Send API. |
+| GetMessageStatus | Get the delivery status of a messageId. |
++
+### Email User Engagement
+
+The `Email Service User Engagement` metric is supported with HTML type emails and must be opted into on your Domains resource. These dimensions are available for `Email Service User Engagement` metrics:
+
+| Dimension | Description |
+| -- | - |
+| EngagementType | Type of interaction performed by the receiver of the email. |
++
+## Next steps
+
+- Learn more about [Data Platform Metrics](../../azure-monitor/essentials/data-platform-metrics.md)
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/metrics.md
Title: Metric definitions for Azure Communication Service
+ Title: Metric definitions for Azure Communication Services
description: This document covers definitions of metrics available in the Azure portal.--++ - Previously updated : 06/30/2021+ Last updated : 06/30/2023 # Metrics overview
-Azure Communication Services currently provides metrics for all ACS primitives. [Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that Chat and SMS requests emit.
+Azure Communication Services currently provides metrics for all Azure Communication Services primitives. [Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that email requests emit.
## Where to find metrics
Primitives in Azure Communication Services emit metrics for API requests. These
## Metric definitions
-Today there are various types of requests that are represented within Communication Services metrics: **Chat API requests** , **SMS API requests** , **Authentication API requests**, **Call Automation API requests** and **Network Traversal API requests**.
- All API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`. More information on supported aggregation types and time series aggregations can be found [Advanced features of Azure Metrics Explorer](../../azure-monitor/essentials/metrics-charts.md#aggregation)
More information on supported aggregation types and time series aggregations can
- **Status Code** - The status code response sent after the request. - **StatusSubClass** - The status code series sent after the response. -
-### Chat API request metric operations
-
-The following operations are available on Chat API request metrics:
-
-| Operation / Route | Description |
-| -- | - |
-| GetChatMessage | Gets a message by message ID. |
-| ListChatMessages | Gets a list of chat messages from a thread. |
-| SendChatMessage | Sends a chat message to a thread. |
-| UpdateChatMessage | Updates a chat message. |
-| DeleteChatMessage | Deletes a chat message. |
-| GetChatThread | Gets a chat thread. |
-| ListChatThreads | Gets the list of chat threads of a user. |
-| UpdateChatThread | Updates a chat thread's properties. |
-| CreateChatThread | Creates a chat thread. |
-| DeleteChatThread | Deletes a thread. |
-| GetReadReceipts | Gets read receipts for a thread. |
-| SendReadReceipt | Sends a read receipt event to a thread, on behalf of a user. |
-| SendTypingIndicator | Posts a typing event to a thread, on behalf of a user. |
-| ListChatThreadParticipants | Gets the members of a thread. |
-| AddChatThreadParticipants | Adds thread members to a thread. If members already exist, no change occurs. |
-| RemoveChatThreadParticipant | Remove a member from a thread. |
--
-If a request is made to an operation that isn't recognized, you receive a "Bad Route" value response.
-
-### SMS API requests
-
-The following operations are available on SMS API request metrics:
-
-| Operation / Route | Description |
-| -- | - |
-| SMSMessageSent | Sends an SMS message. |
-| SMSDeliveryReportsReceived | Gets SMS Delivery Reports |
-| SMSMessagesReceived | Gets SMS messages. |
---
-### Authentication API requests
-
-The following operations are available on Authentication API request metrics:
-
-| Operation / Route | Description |
-| -- | - |
-| CreateIdentity | Creates an identity representing a single user. |
-| DeleteIdentity | Deletes an identity. |
-| CreateToken | Creates an access token. |
-| RevokeToken | Revokes all access tokens created for an identity before a time given. |
-| ExchangeTeamsUserAccessToken | Exchange an Azure Active Directory (Azure AD) access token of a Teams user for a new Communication Identity access token with a matching expiration time.|
--
-### Call Automation API requests
-
-The following operations are available on Call Automation API request metrics:
-
-| Operation / Route | Description |
-| -- | - |
-| Create Call | Create an outbound call to user.
-| Answer Call | Answer an inbound call. |
-| Redirect Call | Redirect an inbound call to another user. |
-| Reject Call | Reject an inbound call. |
-| Transfer Call To Participant | Transfer 1:1 call to another user. |
-| Play | Play audio to call participants. |
-| PlayPrompt | Play a prompt to users as part of the Recognize action. |
-| Recognize | Recognize user input from call participants. |
-| Add Participants | Add a participant to a call. |
-| Remove Participants | Remove a participant from a call. |
-| HangUp Call | Hang up your call leg. |
-| Terminate Call | End the call for all participants. |
-| Get Call | Get details about a call. |
-| Get Participant | Get details on a call participant. |
-| Get Participants | Get all participants in a call. |
-| Delete Call | Delete a call. |
-| Cancel All Media Operations | Cancel all ongoing or queued media operations in a call. |
--
-### Network Traversal API requests
-
-The following operations are available on Network Traversal API request metrics:
-
-| Operation / Route | Description |
-| -- | - |
-| IssueRelayConfiguration | Issue configuration for an STUN/TURN server. |
--
-### Rooms API requests
-
-The following operations are available on Rooms API request metrics:
-
-| Operation / Route | Description |
-| -- | - |
-| CreateRoom | Creates a Room. |
-| DeleteRoom | Deletes a Room. |
-| GetRoom | Gets a Room by Room ID. |
-| PatchRoom | Updates a Room by Room ID. |
-| ListRooms | Lists all the Rooms for an ACS Resource. |
-| AddParticipants | Adds participants to a Room.|
-| RemoveParticipants | Removes participants from a Room. |
-| GetParticipants | Gets list of participants for a Room. |
-| UpdateParticipants | Updates list of participants for a Room. |
--

## Next steps

- Learn more about [Data Platform Metrics](../../azure-monitor/essentials/data-platform-metrics.md)
communication-services Manage Call Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/manage-call-quality.md
can provide. In this scenario, you could utilize our [Video constraints](video-c
## Implement existing quality and reliability capabilities before deployment
-Before you launch and scale your Azure Communication Services calling
+> [!NOTE]
+> We recommend you use our easy-to-implement samples since they're already optimized to give your users the best call quality. Please see: [Samples](../../overview.md#samples)
+
+If our calling samples don't meet your needs, or if you decide to customize your solution, please ensure you understand and implement the following capabilities in your custom calling scenarios.
+
+Before you launch and scale your customized Azure Communication Services calling
solution, implement the following capabilities to support a high quality calling experience. These tools help prevent common quality and reliability calling issues from happening and diagnose issues if they occur. Keep in mind that some of this call data isn't created or stored unless you implement these capabilities. The following sections detail the tools to implement at different phases of a call:
communication-services Teams Interop Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/teams-interop-call-automation.md
description: Provides a how-to for adding a Microsoft Teams user to a call with
+ Last updated 03/28/2023
[!INCLUDE [Private Preview Notice](../../includes/private-preview-include.md)]
-In this quickstart, we use the Azure Communication Services Call Automation APIs to add, remove and transfer to a Teams user.
+In this quickstart, we use the Azure Communication Services Call Automation APIs to add, remove, and transfer a call to a Teams user.
You need to be part of the Azure Communication Services TAP program. It's likely that you're already part of this program, and if you aren't, sign up using https://aka.ms/acs-tap-invite. To access the specific Teams Interop functionality for Call Automation, submit your Teams Tenant IDs and Azure Communication Services Resource IDs by filling out this form: https://aka.ms/acs-ca-teams-tap. You need to fill out the form every time you need a new tenant ID and new resource ID allow-listed.
If you want to clean up and remove a Communication Services subscription, you ca
- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
- Learn more about capabilities of [Teams Interoperability support with ACS Call Automation](../../concepts/call-automation/call-automation-teams-interop.md)
- Learn about [Play action](../../concepts/call-automation/play-Action.md) to play audio in a call.
-
+- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
The *Is Configurable* column in the following tables denotes a feature maximum m
| Feature | Scope | Default | Is Configurable | Remarks | |--|--|--|--|--| | Environments | Region | Up to 15 | Yes | Limit up to 15 environments per subscription, per region. |
-| Environments | GLobal | Up to 20 | Yes | Limit up to 20 environments per subscription accross all regions |
+| Environments | Global | Up to 20 | Yes | Limit up to 20 environments per subscription across all regions |
| Container Apps | Environment | Unlimited | n/a | | | Revisions | Container app | 100 | No | | | Replicas | Revision | 300 | Yes | |
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Title: What is Azure Cosmos DB analytical store? description: Learn about Azure Cosmos DB transactional (row-based) and analytical (column-based) store. Benefits of analytical store, performance impact for large-scale workloads, and auto sync of data from transactional store to analytical store. ++ Last updated 04/18/2023- - # What is Azure Cosmos DB analytical store?+ [!INCLUDE[NoSQL, MongoDB, Gremlin](includes/appliesto-nosql-mongodb-gremlin.md)] Azure Cosmos DB analytical store is a fully isolated column store for enabling large-scale analytics against operational data in your Azure Cosmos DB, without any impact to your transactional workloads.
Analytical store relies on Azure Storage and offers the following protection aga
* By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year. * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year.
-For more information about Azure Storage durability, click [here](https://learn.microsoft.com/azure/storage/common/storage-redundancy).
+For more information about Azure Storage durability, click [here](/azure/storage/common/storage-redundancy).
## Backup
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/dedicated-gateway.md
Title: Azure Cosmos DB dedicated gateway description: A dedicated gateway is compute that is a front-end to your Azure Cosmos DB account. When you connect to the dedicated gateway, it routes requests and caches data. ++ Last updated 08/29/2022-- # Azure Cosmos DB dedicated gateway - Overview+ [!INCLUDE[NoSQL](includes/appliesto-nosql.md)] A dedicated gateway is server-side compute that is a front-end to your Azure Cosmos DB account. When you connect to the dedicated gateway, it both routes requests and caches data. Like provisioned throughput, the dedicated gateway is billed hourly.
The dedicated gateway is available in the following sizes. The integrated cache
There are many different ways to provision a dedicated gateway: - [Provision a dedicated gateway using the Azure portal](how-to-configure-integrated-cache.md#provision-the-dedicated-gateway)-- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2022-11-15/service/create#sqldedicatedgatewayservicecreate)
+- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db/)
- [Azure CLI](/cli/azure/cosmosdb/service?view=azure-cli-latest&preserve-view=true#az-cosmosdb-service-create) - [ARM template](/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep) - Note: You cannot deprovision a dedicated gateway using ARM templates
cosmos-db Change Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-streams.md
while (!cursor.isExhausted()) {
# [C#](#tab/csharp)

```csharp
+var collection = new MongoClient("<connection-string>")
+ .GetDatabase("<database-name>")
+ .GetCollection<BsonDocument>("<collection-name>");
+ var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>()
- .Match(change => change.OperationType == ChangeStreamOperationType.Insert || change.OperationType == ChangeStreamOperationType.Update || change.OperationType == ChangeStreamOperationType.Replace)
+ .Match(change =>
+ change.OperationType == ChangeStreamOperationType.Insert ||
+ change.OperationType == ChangeStreamOperationType.Update ||
+ change.OperationType == ChangeStreamOperationType.Replace
+ )
.AppendStage<ChangeStreamDocument<BsonDocument>, ChangeStreamDocument<BsonDocument>, BsonDocument>(
- "{ $project: { '_id': 1, 'fullDocument': 1, 'ns': 1, 'documentKey': 1 }}");
-
-var options = new ChangeStreamOptions{
- FullDocument = ChangeStreamFullDocumentOption.UpdateLookup
- };
-
-var enumerator = coll.Watch(pipeline, options).ToEnumerable().GetEnumerator();
-
-while (enumerator.MoveNext()){
- Console.WriteLine(enumerator.Current);
- }
-
-enumerator.Dispose();
+ @"{
+ $project: {
+ '_id': 1,
+ 'fullDocument': 1,
+ 'ns': 1,
+ 'documentKey': 1
+ }
+ }"
+ );
+
+ChangeStreamOptions options = new ()
+{
+ FullDocument = ChangeStreamFullDocumentOption.UpdateLookup
+};
+
+using IChangeStreamCursor<BsonDocument> enumerator = collection.Watch(
+ pipeline,
+ options
+);
+
+Console.WriteLine("Waiting for changes...");
+while (enumerator.MoveNext())
+{
+ IEnumerable<BsonDocument> changes = enumerator.Current;
+ foreach(BsonDocument change in changes)
+ {
+ Console.WriteLine(change);
+ }
+}
```

# [Java](#tab/java)
cosmos-db Partners Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partners-migration.md
Last updated 08/26/2021
# Azure Cosmos DB NoSQL migration and application development partners+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] From NoSQL migration to application development, you can choose from a variety of experienced systems integrator partners and tools to support your Azure Cosmos DB solutions. This article lists the partners who have solutions or services that use Azure Cosmos DB. This list changes over time; Microsoft isn't responsible for any changes or updates made to the solutions of these partners.
From NoSQL migration to application development, you can choose from a variety o
| [Solidsoft Reply](https://www.reply.com/solidsoft-reply/) | NoSQL migration | Croatia, Sweden, Denmark, Ireland, Bulgaria, Slovenia, Cyprus, Malta, Lithuania, the Czech Republic, Iceland, and Switzerland and Liechtenstein| | [Spanish Point Technologies](https://www.spanishpoint.ie/) | NoSQL migration| Ireland| | [Syone](https://www.syone.com/) | NoSQL migration| Portugal|
-|[Tallan](https://www.tallan.com/) | App development | USA |
+|[EY](https://www.ey.com/alliances/microsoft) | App development | USA |
| [TCS](https://www.tcs.com/) | App development | USA, UK, France, Malaysia, Denmark, Norway, Sweden| |[VTeamLabs](https://www.vteamlabs.com/) | Personalization, Retail (inventory), IoT, Gaming, Operational Analytics (Spark), Serverless architecture, NoSQL Migration, App development | USA | | [White Duck GmbH](https://whiteduck.de/en/) |New app development, App Backend, Storage for document-based data| Germany |
cosmos-db Howto Ingest Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-stream-analytics.md
Title: Real-time data ingestion with Azure Stream Analytics - Azure Cosmos DB for PostgreSQL description: See how to transform and ingest streaming data from Azure Cosmos DB for PostgreSQL by using Azure Stream Analytics.--++
cost-management-billing Understand Usage Details Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/understand-usage-details-fields.md
description: This article describes the fields in the usage data files. Previously updated : 04/04/2023 Last updated : 07/14/2023
MPA accounts have all MCA terms, in addition to the MPA terms, as described in t
| BillingAccountId┬╣ | All | Unique identifier for the root billing account. | | BillingAccountName | All | Name of the billing account. | | BillingCurrency | All | Currency associated with the billing account. |
+| BillingCurrencyCode | All | See BillingCurrency. |
| BillingPeriod | EA, pay-as-you-go | The billing period of the charge. | | BillingPeriodEndDate | All | The end date of the billing period. | | BillingPeriodStartDate | All | The start date of the billing period. |
MPA accounts have all MCA terms, in addition to the MPA terms, as described in t
| CostCenter┬╣ | EA, MCA | The cost center defined for the subscription for tracking costs (only available in open billing periods for MCA accounts). | | Cost | EA, pay-as-you-go | See CostInBillingCurrency. | | CostAllocationRuleName | EA, MCA | Name of the Cost Allocation rule that's applicable to the record. |
-| CostInBillingCurrency | MCA | Cost of the charge in the billing currency before credits or taxes. |
+| CostInBillingCurrency | EA, MCA | Cost of the charge in the billing currency before credits or taxes. |
| CostInPricingCurrency | MCA | Cost of the charge in the pricing currency before credits or taxes. | | Currency | EA, pay-as-you-go | See `BillingCurrency`. | | CustomerName | MPA | Name of the Azure Active Directory tenant for the customer's subscription. |
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 05/26/2023 Last updated : 07/07/2023
This article explains the common tasks that an Enterprise Agreement (EA) adminis
> > This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment. >
-> As of April 24, 2023 EA customers won't be able to manage their Azure Government EA enrollments from [Azure portal](https://portal.azure.com) instead they can manage it from [Azure Government portal](https://portal.azure.us).
+> As of August 14, 2023, EA customers won't be able to manage their Azure Government EA enrollments from the [Azure portal](https://portal.azure.com). Instead, they can manage them from the [Azure Government portal](https://portal.azure.us).
## Manage your enrollment
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-connector-format.md
If you still want to transfer files such as CSV and Excel files with different s
- **Option-1**: You need to manually merge the schema of different files to get the full schema. For example, file_1 has columns `c_1`, `c_2`, `c_3` while file_2 has columns `c_3`, `c_4`, ... `c_10`, so the merged and full schema is `c_1`, `c_2`, ... `c_10`. Then make the other files have the same schema even if they have no data. For example, if file_x with sheet "SHEET_1" only has columns `c_1`, `c_2`, `c_3`, `c_4`, add columns `c_5`, `c_6`, ... `c_10` to the sheet too, and then it can work. - **Option-2**: Use **range (for example, A1:G100) + firstRowAsHeader=false**, and then it can load data from all Excel files even though the column name and count is different.
-## Delta format
-### The sink does not support the schema drift with upsert or update
-#### Symptoms
-You may face the issue that the delta sink in mapping data flows does not support schema drift with upsert/update. The problem is that the schema drift does not work when the delta is the target in a mapping data flow and user configure an update/upsert. 
-If a column is added to the source after an "initial" load to the delta, the subsequent jobs just fail with an error that it cannot find the new column, and this happens when you upsert/update with the alter row. It seems to work for inserts only.
-#### Error message
-`DF-SYS-01 at Sink 'SnkDeltaLake': org.apache.spark.sql.AnalysisException: cannot resolve target.BICC_RV in UPDATE clause given columns target. `
-#### Cause
-This is an issue for delta format because of the limitation of io delta library used in the data flow runtime. This issue is still in fixing.
-
-#### Recommendation
-To solve this problem, you need to update the schema firstly and then write the data. You can follow the steps below: <br/>
-1. Create one data flow that includes an insert-only delta sink with the merge schema option to update the schema. 
-1. After Step 1, use delete/upsert/update to modify the target sink without changing the schema. <br/>
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
data factory from the resources list.
4. Select + **New** under **Managed private endpoints**. 5. Select the **Private Link Service** tile from the list and select **Continue**. 6. Enter the name of private endpoint and select **myPrivateLinkService** in private link service list.
-7. Add FQDN of your target on-premises SQL Server.
+7. Add the `<FQDN>,<port>` of your target on-premises SQL Server, for example `sqlserver.contoso.com,1433`. By default, the port is 1433.
:::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint-6.png" alt-text="Screenshot that shows the private endpoint settings.":::
data-manager-for-agri How To Set Up Sensor As Customer And Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensor-as-customer-and-partner.md
Title: Push and consume sensor data in Data Manager for Agriculture description: Learn how to push sensor data as a provider and egress it as a customer-+
Use [IoT Hub Device SDKs](/azure/iot-hub/iot-hub-devguide-sdks#azure-iot-hub-dev
For all sensor telemetry events, "timestamp" is a mandatory property and has to be in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ).
-You're now all set to start pushing sensor data for all sensors using the respective connection string provided for each sensor. However, sensor data should be sent in a JSON format as defined by Data Manager for Agriculture. Refer to the telemetry schema that follows:
+You're now all set to start pushing sensor data for all sensors using the respective connection string provided for each sensor. However, sensor data should be sent in the format defined in the sensor data model created in Step 3. Refer to an example of the telemetry schema that follows:
```json {
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
As a part of the migration, source code management system specific recommendatio
Customers that rely on the `resourceID` to query DevOps recommendation data will be affected, for example, in Azure Resource Graph queries, workbook queries, and API calls to Microsoft Defender for Cloud.
-Queries will need to be updated to include both the old and new `resourceID` to show both, for example, total over time.
+Queries will need to be updated to include both the old and new `resourceID` to show both, for example, total over time.
+
+Additionally, customers that have created custom queries using the DevOps workbook will need to update the assessment keys for the impacted DevOps security recommendations.
The recommendations page's experience will have minimal impact and deprecated assessments may continue to show for a maximum of 14 days if new scan results aren't submitted.
deployment-environments How To Install Devcenter Cli Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-install-devcenter-cli-extension.md
Title: Install the devcenter Azure CLI extension
-description: Learn how to install the Azure CLI and the Azure Deployment Environments CLI extension so you can create Deployment Environments resources from the command line.
+description: Learn how to install the Azure CLI and the Deployment Environments CLI extension so you can create Deployment Environments resources from the command line.
Last updated 04/25/2023
-Customer intent: As a dev infra admin, I want to install the Deployment Environments CLI extension so that I can create Deployment Environments resources from the command line.
+Customer intent: As a dev infra admin, I want to install the devcenter extension so that I can create Deployment Environments resources from the command line.
# Azure Deployment Environments Azure CLI extension
-In addition to the Azure admin portal and the developer portal, you can use the Deployment Environments Azure CLI extension to create resources. Azure Deployment Environments and Microsoft Dev Box use the same Azure CLI extension, which is called `devcenter`.
+In addition to the Azure admin portal and the developer portal, you can use the Deployment Environments Azure CLI extension to create resources. Azure Deployment Environments and Microsoft Dev Box use the same Azure CLI extension, which is called *devcenter*.
-## Install the Deployment Environments CLI extension
+## Install the devcenter extension
-To install the Deployment Environments Azure CLI extension, you first need to install the Azure CLI. The following steps show you how to install the Azure CLI, then the Deployment Environments CLI extension.
+To install the devcenter extension, you first need to install the Azure CLI. The following steps show you how to install the Azure CLI, then the devcenter extension.
1. Download and install the [Azure CLI](/cli/azure/install-azure-cli).
-1. Install the Deployment Environments CLI extension
+1. Install the devcenter extension
``` azurecli
az extension add --name devcenter
```
-1. Check that the `devcenter` extension is installed
+1. Check that the devcenter extension is installed
``` azurecli
az extension list
```
-### Update the Deployment Environments CLI extension
-You can update the Deployment Environments CLI extension if you already have it installed.
+### Update the devcenter extension
+You can update the devcenter extension if you already have it installed.
To update a version of the extension that's installed:

``` azurecli
az extension update --name devcenter
```
-### Remove the Deployment Environments CLI extension
+### Remove the devcenter extension
To remove the extension, use the following command:

```azurecli
az extension remove --name devcenter
```
-## Get started with the Deployment Environments CLI extension
+## Get started with the devcenter extension
-You might find the following commands useful as you work with the Deployment Environments CLI extension.
+You might find the following commands useful as you work with the devcenter extension.
1. Sign in to Azure CLI with your work account.
dev-box How To Install Dev Box Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-install-dev-box-cli.md
Title: Install the Microsoft Dev Box Azure CLI extension
-description: Learn how to install the Azure CLI and the Microsoft Dev Box CLI extension so you can create Dev Box resources from the command line.
+description: Learn how to create Dev Box resources from the command line. Install the Azure CLI and the devcenter extension to gain access to Dev Box commands.
Last updated 04/25/2023
Customer intent: As a dev infra admin, I want to install the Dev Box CLI extension so that I can create Dev Box resources from the command line.
-# Configure Microsoft Dev Box from the command-line with the Azure CLI extension
+# Configure Microsoft Dev Box from the command-line with the Azure CLI
-In addition to the Azure admin portal and the developer portal, you can use the Dev Box Azure CLI extension to create resources. Microsoft Dev Box and Azure Deployment Environments use the same Azure CLI extension, which is called `devcenter`.
+In addition to the Azure admin portal and the developer portal, you can use the Dev Box Azure CLI extension to create resources. Microsoft Dev Box and Azure Deployment Environments use the same Azure CLI extension, which is called *devcenter*.
-## Install the Dev Box CLI extension
+## Install the devcenter extension
-To install the Dev Box Azure CLI extension, you first need to install the Azure CLI. The following steps show you how to install the Azure CLI, then the Dev Box CLI extension.
+To install the devcenter extension, you first need to install the Azure CLI. The following steps show you how to install the Azure CLI, then the devcenter extension.
1. Download and install the [Azure CLI](/cli/azure/install-azure-cli).
-1. Install the Dev Box CLI extension
+1. Install the devcenter extension
``` azurecli
az extension add --name devcenter
```
-1. Check that the `devcenter` extension is installed
+1. Check that the devcenter extension is installed
``` azurecli
az extension list
```
-### Update the Dev Box CLI extension
-You can update the Dev Box CLI extension if you already have it installed.
+### Update the devcenter extension
+You can update the devcenter extension if you already have it installed.
To update a version of the extension that's installed:

``` azurecli
az extension update --name devcenter
```
-### Remove the Dev Box CLI extension
+### Remove the devcenter extension
To remove the extension, use the following command:

```azurecli
az extension remove --name devcenter
```
-## Get started with the Dev Box CLI extension
+## Get started with the devcenter extension
-You might find the following commands useful as you work with the Dev Box CLI extension.
+You might find the following commands useful as you work with Dev Box.
1. Sign in to Azure CLI with your work account.
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
The following tables identify the services and tools you can use to plan for dat
| Source | Target | Schema | Data<br/>(Offline) | Data<br/>(Online) | | | | | | |
-| SQL Server | Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| SQL Server | Azure SQL MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| SQL Server | Azure SQL MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| SQL Server | Azure Synapse Analytics | | | |
-| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Amazon RDS for SQL | Azure SQL MI |[Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Amazon RDS for SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Oracle | Azure DB for PostgreSQL -<br/>Flexible server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| MongoDB | Azure Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for SQL | Azure SQL MI |[Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure DB for PostgreSQL -<br/>Flexible server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| MongoDB | Azure Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| Cassandra | Azure Cosmos DB | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) |
-| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Amazon RDS for MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| Access | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| Sybase - SAP IQ | Azure SQL DB, MI, VM | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | | | | | | | |
expressroute How To Configure Custom Bgp Communities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-configure-custom-bgp-communities.md
Title: 'Configure custom BGP communities for Azure ExpressRoute private peering (Preview)'
+ Title: 'Configure custom BGP communities for Azure ExpressRoute private peering'
description: Learn how to apply or update BGP community value for a new or an existing virtual network.
Last updated 12/27/2022
-# Configure custom BGP communities for Azure ExpressRoute private peering (Preview)
+# Configure custom BGP communities for Azure ExpressRoute private peering
BGP communities are groupings of IP prefixes tagged with a community value. This value can be used to make routing decisions on the router's infrastructure. With BGP community tags, you can apply filters or specify routing preferences for traffic sent from Azure to your on-premises network. This article explains how to apply a custom BGP community value to your virtual networks by using Azure PowerShell. Once configured, you can view the regional BGP community value and the custom community value of your virtual network. This value is used for outbound traffic sent over ExpressRoute when it originates from that virtual network.
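For example, the following Azure PowerShell sketch creates a virtual network with a custom BGP community value and then reads the configured values back. It's a minimal sketch only: it assumes a recent Az.Network module version that exposes the `-BgpCommunity` parameter, and the resource names and the community value `12076:20000` are placeholders.

```powershell
# Minimal sketch: create a virtual network tagged with a custom BGP community value.
# Assumes the Az.Network module; names and the value 12076:20000 are placeholders.
$vnet = New-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myResourceGroup" `
    -Location "westus" -AddressPrefix "10.0.0.0/16" -BgpCommunity "12076:20000"

# View the regional and custom community values configured on the virtual network.
$vnet.BgpCommunities
```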
global-secure-access How To Install Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-install-windows-client.md
Organizations must then create a system variable named `grpc_proxy` with a value
Since UDP traffic isn't supported in the current preview, organizations that plan to tunnel their Exchange Online traffic should disable the QUIC protocol (443 UDP). Administrators can disable this protocol, triggering clients to fall back to HTTPS (443 TCP), with the following Windows Firewall rule: ```powershell
-@New-NetFirewallRule -DisplayName "Block QUIC for Exchange Online" -Direction Outbound -Action Block -Protocol UDP -RemoteAddress 13.107.6.152/31,13.107.18.10/31,13.107.128.0/22,23.103.160.0/20,40.96.0.0/13,40.104.0.0/15,52.96.0.0/14,131.253.33.215/32,132.245.0.0/16,150.171.32.0/22,204.79.197.215/32,6.6.0.0/16 -RemotePort 443
+New-NetFirewallRule -DisplayName "Block QUIC for Exchange Online" -Direction Outbound -Action Block -Protocol UDP -RemoteAddress 13.107.6.152/31,13.107.18.10/31,13.107.128.0/22,23.103.160.0/20,40.96.0.0/13,40.104.0.0/15,52.96.0.0/14,131.253.33.215/32,132.245.0.0/16,150.171.32.0/22,204.79.197.215/32,6.6.0.0/16 -RemotePort 443
``` This list of IPv4 addresses is based on the [Office 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges#exchange-online) and the IPv4 block used by the Global Secure Access Client.
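After creating the rule, one way to confirm it's in place is to query it back with the built-in NetSecurity cmdlets (the display name matches the rule created above):

```powershell
# Verify the QUIC-blocking rule exists and is enabled.
Get-NetFirewallRule -DisplayName "Block QUIC for Exchange Online" |
    Select-Object DisplayName, Enabled, Direction, Action
```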
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
The FHIR service supports the $import operation that allows you to import data into
* Incremental mode is optimized to load data into the FHIR server periodically and doesn't block writes via the API. It also allows you to load lastUpdated and versionId from resource Meta (if present in the resource JSON). > [!IMPORTANT]
-> Incremental mode capability is currently in preview.
+> Incremental mode capability is currently in preview and is offered free of charge. After general availability, use of incremental import will incur charges.
> Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities.
->
> For more information, review [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). In this document, we go over the three steps used to configure import settings on the FHIR service:
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
Title: Configure devices for network proxies - Azure IoT Edge | Microsoft Docs
-description: How to configure the Azure IoT Edge runtime and any internet-facing IoT Edge modules to communicate through a proxy server.
+ Title: Configure devices for network proxies for Azure IoT Edge
+description: How to configure the Azure IoT Edge runtime and any internet-facing IoT Edge modules to communicate through a proxy server.
Previously updated : 11/1/2022 Last updated : 07/14/2023
Enter the following text, replacing **\<proxy URL>** with your proxy server addr
```ini [Service]
-Environment=https_proxy=<proxy URL>
+Environment="https_proxy=<proxy URL>"
``` Starting in version 1.2, IoT Edge uses the IoT identity service to handle device provisioning with IoT Hub or IoT Hub Device Provisioning Service. Open an editor in the terminal to configure the IoT identity service daemon.
Enter the following text, replacing **\<proxy URL>** with your proxy server addr
```ini [Service]
-Environment=https_proxy=<proxy URL>
+Environment="https_proxy=<proxy URL>"
``` Refresh the service manager to pick up the new configurations.
iot-edge How To Visual Studio Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-visual-studio-develop-module.md
description: Use Visual Studio to develop a custom IoT Edge module and deploy to
Previously updated : 10/24/2022 Last updated : 07/13/2023 zone_pivot_groups: iotedge-dev
This article assumes that you use a machine running Windows as your development
* Install the Azure IoT Edge Tools either from the Marketplace or from Visual Studio: * Download and install [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs17iotedgetools) from the Visual Studio Marketplace.
- * Or, in Visual Studio go to **Extensions > Manage Extensions**. The **Manage Extensions** popup will open. In the search box in the upper right, add the text **Azure IoT Edge Tools for VS 2022**, then select **Download**. Close the popup when finished.
+ * Or, in Visual Studio, go to **Extensions > Manage Extensions**. The **Manage Extensions** popup opens. In the search box in the upper right, enter **Azure IoT Edge Tools for VS 2022**, then select **Download**. Close the popup when finished.
You may have to restart Visual Studio.
This article assumes that you use a machine running Windows as your development
* Install the [Azure CLI](/cli/azure/install-azure-cli).
-* To test your module on a device, you'll need an active IoT Hub with at least one IoT Edge device. To create an IoT Edge device for testing you can create one in the Azure portal or with the CLI:
+* To test your module on a device, you need an active IoT Hub with at least one IoT Edge device. To create an IoT Edge device for testing, you can create one in the Azure portal or with the CLI:
* Creating one in the [Azure portal](https://portal.azure.com/) is the quickest. From the Azure portal, go to your IoT Hub resource. Select **Devices** under the **Device management** menu and then select **Add Device**.
This article assumes that you use a machine running Windows as your development
Finally, confirm that your new device exists in your IoT Hub, from the **Device management > Devices** menu. For more information on creating an IoT Edge device through the Azure portal, read [Create and provision an IoT Edge device on Linux using symmetric keys](how-to-provision-single-device-linux-symmetric.md).
- * To create an IoT Edge device with the CLI follow the steps in the quickstart for [Linux](quickstart-linux.md#register-an-iot-edge-device) or [Windows](quickstart.md#register-an-iot-edge-device). In the process of registering an IoT Edge device, you create an IoT Edge device.
+ * To create an IoT Edge device with the CLI, follow the steps in the quickstart for [Linux](quickstart-linux.md#register-an-iot-edge-device) or [Windows](quickstart.md#register-an-iot-edge-device). Registering a device with your IoT hub creates the IoT Edge device identity, as the sketch after this list shows.
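The following is a minimal sketch of that CLI route. It assumes the Azure CLI with the azure-iot extension installed; the hub and device names are placeholders:

```powershell
# Register an IoT Edge device identity (note the --edge-enabled flag).
az iot hub device-identity create --hub-name myHub --device-id myEdgeDevice --edge-enabled

# Retrieve the device connection string used to provision the device.
az iot hub device-identity connection-string show --hub-name myHub --device-id myEdgeDevice
```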
If you're running the IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you start development in Visual Studio. ## Create an Azure IoT Edge project
-The IoT Edge project template in Visual Studio creates a solution to deploy to IoT Edge devices. First, you'll create an Azure IoT Edge solution. Then, you'll create a module in that solution. Each IoT Edge solution can contain more than one module.
+The IoT Edge project template in Visual Studio creates a solution to deploy to IoT Edge devices. First, you create an Azure IoT Edge solution. Then, you create a module in that solution. Each IoT Edge solution can contain more than one module.
-In our solution, we're going to build three projects. The main module that contains *EdgeAgent* and *EdgeHub*, in addition to the temperature sensor module. Next, you'll add two more IoT Edge modules.
+In our solution, we're going to build three projects. The main project contains *EdgeAgent* and *EdgeHub*, in addition to the temperature sensor module. Next, you add two more IoT Edge modules.
-> [!TIP]
-> The IoT Edge project structure created by Visual Studio is not the same as the one in Visual Studio Code.
+> [!IMPORTANT]
+> The IoT Edge project structure created by Visual Studio isn't the same as the one in Visual Studio Code.
+>
+> Currently, the Azure IoT Edge Dev Tool CLI doesn't support creating the Visual Studio project type. You need to use the Visual Studio IoT Edge extension to create the Visual Studio project.
1. In Visual Studio, create a new project.
The module project folder contains a file for your module code named either `Pro
### Deployment manifest of your project
-The deployment manifest you'll edit is named `deployment.debug.template.json`. This file is a template of an IoT Edge deployment manifest that defines all the modules that run on a device along with how they communicate with each other. For more information about deployment manifests, see [Learn how to deploy modules and establish routes](module-composition.md).
+The deployment manifest you edit is named `deployment.debug.template.json`. This file is a template of an IoT Edge deployment manifest that defines all the modules that run on a device along with how they communicate with each other. For more information about deployment manifests, see [Learn how to deploy modules and establish routes](module-composition.md).
If you open this deployment template, you see that the two runtime modules, **edgeAgent** and **edgeHub** are included, along with the custom module that you created in this Visual Studio project. A fourth module named **SimulatedTemperatureSensor** is also included. This default module generates simulated data that you can use to test your modules, or delete if it's not necessary. To see how the simulated temperature sensor works, view the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).
Currently, the latest stable runtime version is 1.4. You should update the IoT E
1. In the Solution Explorer, right-click the name of your main project and select **Set IoT Edge runtime version**.
- :::image type="content" source="./media/how-to-visual-studio-develop-module/set-iot-edge-runtime-version.png" alt-text="Screenshot of how to find and select the menu item named 'Set I o T Edge Runtime version'.":::
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/set-iot-edge-runtime-version.png" alt-text="Screenshot of how to find and select the menu item named 'Set IoT Edge Runtime version'.":::
1. Use the drop-down menu to choose the runtime version that your IoT Edge devices are running, then select **OK** to save your changes. If no change was made, select **Cancel** to exit. Currently, the extension doesn't include a selection for the latest runtime versions. If you want to set the runtime version higher than 1.2, open the *deployment.debug.template.json* deployment manifest file. Change the runtime version for the system runtime module images *edgeAgent* and *edgeHub*. For example, if you want to use the IoT Edge runtime version 1.4, change the following lines in the deployment manifest file: ```json
- ...
"systemModules": { "edgeAgent": {
- ...
- "image": "mcr.microsoft.com/azureiotedge-agent:1.4",
- ...
+ //...
+ "image": "mcr.microsoft.com/azureiotedge-agent:1.4"
+ //...
"edgeHub": {
- ...
+ //...
"image": "mcr.microsoft.com/azureiotedge-hub:1.4",
- ...
+ //...
```
-1. If you changed the version, regenerate your deployment manifest by right-clicking the name of your project and select **Generate deployment for IoT Edge**. This generates a deployment manifest based on your deployment template and will appear in the **config** folder of your Visual Studio project.
+1. If you changed the version, regenerate your deployment manifest by right-clicking the name of your project and selecting **Generate deployment for IoT Edge**. This action generates a deployment manifest based on your deployment template and places it in the **config** folder of your Visual Studio project.
::: zone-end
Currently, the latest stable runtime version is 1.4. You should update the IoT E
1. Change the runtime version for the system runtime module images *edgeAgent* and *edgeHub*. For example, if you want to use the IoT Edge runtime version 1.4, change the following lines in the deployment manifest file: ```json
- ...
"systemModules": { "edgeAgent": {
- ...
+ //...
"image": "mcr.microsoft.com/azureiotedge-agent:1.4",
- ...
+ //...
"edgeHub": {
- ...
+ //...
"image": "mcr.microsoft.com/azureiotedge-hub:1.4",
- ...
+ //...
``` ::: zone-end
To initialize the tool in Visual Studio:
### Build and debug a single module
-Typically, you'll want to test and debug each module before running it within an entire solution with multiple modules. The IoT Edge simulator tool allows you to run a single module in isolation a send messages over port 53000.
+Typically, you want to test and debug each module before running it within an entire solution with multiple modules. The IoT Edge simulator tool allows you to run a single module in isolation and send messages over port 53000.
1. In **Solution Explorer**, select and highlight the module project folder (for example, *IotEdgeModule1*). Set the custom module as the startup project. Select **Project** > **Set as StartUp Project** from the menu.
Typically, you'll want to test and debug each module before running it within an
### Build and debug multiple modules
-After you're done developing a single module, you might want to run and debug an entire solution with multiple modules. The IoT Edge simulator tool allows you to run all modules defined in the deployment manifest including a simulated edgeHub for message routing. In this example, you'll run two custom modules and the simulated temperature sensor module. Messages from the simulated temperature sensor module are routed to each custom module.
+After you're done developing a single module, you might want to run and debug an entire solution with multiple modules. The IoT Edge simulator tool allows you to run all modules defined in the deployment manifest including a simulated edgeHub for message routing. In this example, you run two custom modules and the simulated temperature sensor module. Messages from the simulated temperature sensor module are routed to each custom module.
1. In **Solution Explorer**, add a second module to the solution by right-clicking the main project folder. On the menu, select **Add** > **New IoT Edge Module**.
- :::image type="content" source="./media/how-to-visual-studio-develop-module/add-new-module.png" alt-text="Screenshot of how to add a 'New I o T Edge Module' from the menu." lightbox="./media/how-to-visual-studio-develop-module/add-new-module.png":::
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/add-new-module.png" alt-text="Screenshot of how to add a 'New IoT Edge Module' from the menu." lightbox="./media/how-to-visual-studio-develop-module/add-new-module.png":::
1. In the `Add module` window give your new module a name and replace the `localhost:5000` portion of the repository URL with your Azure Container Registry login server, like you did before.
After you're done developing a single module, you might want to run and debug an
"sensorTo<NewModuleName>": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/<NewModuleName>/inputs/input1\")" ```
-1. Right-click the main project (for example, *AzureIotEdgeApp1*) and select **Set as StartUp Project**. By setting the main project as the startup project, all modules in the solution run. This includes both modules you added to the solution as well as the simulated temperature sensor module and the simulated Edge hub.
+1. Right-click the main project (for example, *AzureIotEdgeApp1*) and select **Set as StartUp Project**. By setting the main project as the startup project, all modules in the solution run. This includes the two modules you added to the solution, the simulated temperature sensor module, and the simulated Edge hub.
1. Press **F5** or select the run toolbar button to run the solution. It may take 10 to 20 seconds initially. Be sure you don't have other Docker containers running that might bind the port you need for this project.
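If you're unsure whether the port is free, one quick check is to list containers already publishing it (a sketch; 53000 is the simulator default mentioned earlier in this article):

```powershell
# List any running containers that already publish port 53000.
docker ps --filter "publish=53000"
```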
Once you've developed and debugged your module, you can build and push the modul
**Add credentials to your `.env` file:**
- In **Solution Explorer**, select the **Show All Files** toolbar button. The `.env` file will appear. Add your Azure Container Registry username and password to your `.env` file. These credentials can be found on the **Access Keys** page of your Azure Container Registry in the Azure portal.
In **Solution Explorer**, select the **Show All Files** toolbar button. The `.env` file appears. Add your Azure Container Registry username and password to your `.env` file. You can find these credentials on the **Access Keys** page of your Azure Container Registry in the Azure portal, or retrieve them from the Azure CLI as shown after the following screenshot.
- :::image type="content" source="./media/how-to-visual-studio-develop-module/show-env-file.png" alt-text="Screenshot of button that will show all files in the Solution Explorer.":::
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/show-env-file.png" alt-text="Screenshot of button that shows all files in the Solution Explorer.":::
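As an alternative to the portal, a sketch of retrieving the same credentials from the Azure CLI, assuming the registry's admin user is enabled and that `myacr` stands in for your registry name:

```powershell
# Show the admin username and passwords for the container registry.
az acr credential show --name myacr
```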
```env DEFAULT_RT_IMAGE=1.2
Now that you've built and pushed your module images to your Azure Container Regi
:::image type="content" source="./media/how-to-visual-studio-develop-module/generate-deployment.png" alt-text="Screenshot of location of the 'generate deployment' menu item.":::
-1. Go to your local Visual Studio main project folder and look in the `config` folder. The file path might look like this: `C:\Users\<YOUR-USER-NAME>\source\repos\<YOUR-IOT-EDGE-PROJECT-NAME>\config`. Here you'll find the generated deployment manifest such as `deployment.amd64.debug.json`.
+1. Go to your local Visual Studio main project folder and look in the `config` folder. The file path might look like this: `C:\Users\<YOUR-USER-NAME>\source\repos\<YOUR-IOT-EDGE-PROJECT-NAME>\config`. Here you find the generated deployment manifest such as `deployment.amd64.debug.json`.
1. Check your `deployment.amd64.debug.json` file to confirm the `edgeHub` schema version is set to 1.2.
Now that you've built and pushed your module images to your Azure Container Regi
> [!IMPORTANT] > Once your IoT Edge device is deployed, it currently won't display correctly in the Azure portal with schema version 1.2 (version 1.1 displays fine). This is a known bug that will be fixed soon. However, this bug doesn't affect your device, because it's still connected in IoT Hub and you can communicate with it at any time using the Azure CLI. >
- >:::image type="content" source="./media/how-to-publish-subscribe/unsupported-1.2-schema.png" alt-text="Screenshot of Azure portal error on the I o T Edge device page.":::
+ >:::image type="content" source="./media/how-to-publish-subscribe/unsupported-1.2-schema.png" alt-text="Screenshot of Azure portal error on the IoT Edge device page.":::
1. Now let's deploy our manifest with an Azure CLI command. Open the Visual Studio **Developer Command Prompt** and change to the **config** directory.
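For reference, the deployment command looks like the following sketch. It assumes the Azure CLI with the azure-iot extension installed; the hub name, device ID, and manifest file name are placeholders:

```powershell
# Apply the generated deployment manifest to a single device from the config directory.
az iot edge set-modules --hub-name myHub --device-id myEdgeDevice --content ./deployment.amd64.debug.json
```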
docker push myacr.azurecr.io/iotedgemodule1:0.0.1-amd64
In Visual Studio, open the *deployment.debug.template.json* deployment manifest file in the main project. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) is a JSON document that describes the modules to be configured on the targeted IoT Edge device. Before deployment, you need to update your Azure Container Registry credentials, your module images, and the proper `createOptions` values. For more information about createOption values, see [How to configure container create options for IoT Edge modules](how-to-use-create-options.md).
-1. If you're using an Azure Container Registry to store your module image, you'll need to add your credentials to **deployment.debug.template.json** in the *edgeAgent* settings. For example,
+1. If you're using an Azure Container Registry to store your module image, you need to add your credentials to **deployment.debug.template.json** in the *edgeAgent* settings. For example,
```json "modulesContent": {
In Visual Studio, open *deployment.debug.template.json* deployment manifest file
} } },
- ...
+ //...
``` 1. Replace the *image* property value with the module image name you pushed to the registry. For example, if you pushed an image tagged `myacr.azurecr.io/iotedgemodule1:0.0.1-amd64` for custom module *IotEdgeModule1*, replace the image property value with the tag value.
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-custom-vision.md
Previously updated : 06/21/2023 Last updated : 07/13/2023
[!INCLUDE [iot-edge-version-all-supported](includes/iot-edge-version-all-supported.md)]
-Azure IoT Edge can make your IoT solution more efficient by moving workloads out of the cloud and to the edge. This capability lends itself well to services that process a lot of data, like computer vision models. The [Custom Vision Service](../cognitive-services/custom-vision-service/overview.md) lets you build custom image classifiers and deploy them to devices as containers. Together, these two services enable you to find insights from images or video streams without having to transfer all of the data off site first. Custom Vision provides a classifier that compares an image against a trained model to generate insights.
+Azure IoT Edge can make your IoT solution more efficient by moving workloads out of the cloud and to the edge. This capability lends itself well to services that process large amounts of data, like computer vision models. The [Custom Vision Service](../cognitive-services/custom-vision-service/overview.md) lets you build custom image classifiers and deploy them to devices as containers. Together, these two services enable you to find insights from images or video streams without having to transfer all of the data off site first. Custom Vision provides a classifier that compares an image against a trained model to generate insights.
For example, Custom Vision on an IoT Edge device could determine whether a highway is experiencing higher or lower traffic than normal, or whether a parking garage has available parking spots in a row. These insights can be shared with another service to take action.
Once your image classifier is built and trained, you can export it as a Docker c
### Upload images and train your classifier
-Creating an image classifier requires a set of training images, as well as test images.
+Creating an image classifier requires a set of training images and test images.
1. Clone or download sample images from the [Cognitive-CustomVision-Windows](https://github.com/Microsoft/Cognitive-CustomVision-Windows) repo onto your local development machine.
Creating an image classifier requires a set of training images, as well as test
![Export as DockerFile with Linux containers](./media/tutorial-deploy-custom-vision/export-2.png)
-5. When the export is complete, select **Download** and save the .zip package locally on your computer. Extract all files from the package. You'll use these files to create an IoT Edge module that contains the image classification server.
+5. When the export is complete, select **Download** and save the .zip package locally on your computer. Extract all files from the package. You use these files to create an IoT Edge module that contains the image classification server.
When you reach this point, you've finished creating and training your Custom Vision project. You'll use the exported files in the next section, but you're done with the Custom Vision web page.
A solution is a logical way of developing and organizing multiple modules for a
| Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. | | Provide a solution name | Enter a descriptive name for your solution, like **CustomVisionSolution**, or accept the default. | | Select module template | Choose **Python Module**. |
- | Provide a module name | Name your module **classifier**.<br><br>It's important that this module name be lowercase. IoT Edge is case-sensitive when referring to modules, and this solution uses a library that formats all requests in lowercase. |
+ | Provide a module name | Name your module **classifier**.<br><br>It's important that this module name is lowercase. IoT Edge is case-sensitive when referring to modules, and this solution uses a library that formats all requests in lowercase. |
| Provide Docker image repository for the module | An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server from the Overview page of your container registry in the Azure portal.<br><br>The final string looks like **\<registry name\>.azurecr.io/classifier**. | ![Provide Docker image repository](./media/tutorial-deploy-custom-vision/repository.png)
The Visual Studio Code window loads your IoT Edge solution workspace.
The environment file stores the credentials for your container registry and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device.
-The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
+The IoT Edge extension tries to pull your container registry credentials from Azure and populates them in the environment file. Check to see if your credentials are already included. If not, add them now:
1. In the Visual Studio Code explorer, open the .env file. 2. Update the fields with the **username** and **password** values that you copied from your Azure container registry.
The IoT Edge extension tries to pull your container registry credentials from Az
### Select your target architecture
-Currently, Visual Studio Code can develop modules for Linux AMD64 and Linux ARM32v7 devices. You need to select which architecture you're targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64, which is what we'll use for this tutorial.
+Currently, Visual Studio Code can develop modules for Linux AMD64 and Linux ARM32v7 devices. You need to select which architecture you're targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64, which is what we use for this tutorial.
1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window.
In this section, you add a new module to the same CustomVisionSolution and provi
print("Error: Image path or image-processing endpoint missing") ```
-4. Save the **main.py** file.
+4. Save the **main.py** file.
-5. Open the **requrements.txt** file.
+5. Open the **requirements.txt** file.
6. Add a new line for a library to include in the container.
Make sure that your IoT Edge device is up and running.
2. Right-click the name of your IoT Edge device, then select **Create Deployment for Single Device**.
-3. Select the **deployment.amd64.json** file in the **config** folder and then **Select Edge Deployment Manifest**. Do not use the deployment.template.json file.
+3. Select the **deployment.amd64.json** file in the **config** folder and then **Select Edge Deployment Manifest**. Don't use the deployment.template.json file.
4. Under your device, expand **Modules** to see a list of deployed and running modules. Select the refresh button. You should see the new **classifier** and **cameraCapture** modules running along with the **$edgeAgent** and **$edgeHub**.
There are two ways to view the results of your modules, either on the device its
From your device, view the logs of the cameraCapture module to see the messages being sent and the confirmation that they were received by IoT Hub.
- ```bash
- iotedge logs cameraCapture
- ```
-
-From Visual Studio Code, right-click on the name of your IoT Edge device and select **Start Monitoring Built-in Event Endpoint**.
+```bash
+iotedge logs cameraCapture
+```
+
+For example, you should see output like the following:
+
+```Output
+admin@vm:~$ iotedge logs cameraCapture
+Simulated camera module for Azure IoT Edge. Press Ctrl-C to exit.
+The sample is now sending images for processing and will indefinitely.
+Response from classification service: (200) {"created": "2023-07-13T17:38:42.940878", "id": "", "iteration": "", "predictions": [{"boundingBox": null, "probability": 1.0, "tagId": "", "tagName": "hemlock"}], "project": ""}
+
+Total images sent: 1
+Response from classification service: (200) {"created": "2023-07-13T17:38:53.444884", "id": "", "iteration": "", "predictions": [{"boundingBox": null, "probability": 1.0, "tagId": "", "tagName": "hemlock"}], "project": ""}
+```
+
+You can also view messages from Visual Studio Code. Right-click the name of your IoT Edge device and select **Start Monitoring Built-in Event Endpoint**.
+
+```Output
+[IoTHubMonitor] [2:43:36 PM] Message received from [vision-device/cameraCapture]:
+{
+ "created": "2023-07-13T21:43:35.697782",
+ "id": "",
+ "iteration": "",
+ "predictions": [
+ {
+ "boundingBox": null,
+ "probability": 1,
+ "tagId": "",
+ "tagName": "hemlock"
+ }
+ ],
+ "project": ""
+}
+```
> [!NOTE] > Initially, you may see connection errors in the output from the cameraCapture module. This is due to the delay between modules being deployed and starting. >
-> The cameraCapture module automatically reattempts connection until successful. After successful connection, you will see the expected image classification messages described below.
+> The cameraCapture module automatically reattempts connection until successful. After successful connection, you see the expected image classification messages.
-The results from the Custom Vision module, which are sent as messages from the cameraCapture module, include the probability that the image is of either a hemlock or cherry tree. Since the image is hemlock, you should see the probability as 1.0.
+The results from the Custom Vision module, sent as messages from the cameraCapture module, include the probability that the image is of either a hemlock or cherry tree. Since the image is hemlock, you should see the probability as 1.0.
## Clean up resources
Otherwise, you can delete the local configurations and the Azure resources that
In this tutorial, you trained a Custom Vision model and deployed it as a module onto an IoT Edge device. Then you built a module that can query the image classification service and report its results back to IoT Hub.
-Continue on to the next tutorials to learn about other ways that Azure IoT Edge can help you turn data into business insights at the edge.
+Continue to the next tutorials to learn about other ways that Azure IoT Edge can help you turn data into business insights at the edge.
> [!div class="nextstepaction"] > [Store data at the edge with SQL Server databases](tutorial-store-data-sql-server.md)
iot Iot Mqtt Connect To Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-mqtt-connect-to-iot-hub.md
To learn more, see [Tutorial - Use MQTT to develop an IoT device client](../iot-
To use the MQTT protocol directly, your client *must* connect over TLS/SSL. Attempts to skip this step fail with connection errors.
-In order to establish a TLS connection, you may need to download and reference the DigiCert Baltimore Root Certificate. This certificate is the one that Azure uses to secure the connection. You can find this certificate in the [Azure-iot-sdk-c](https://github.com/Azure/azure-iot-sdk-c/blob/master/certs/certs.c) repository. More information about these certificates can be found on [Digicert's website](https://www.digicert.com/digicert-root-certificates.htm).
+In order to establish a TLS connection, you may need to download and reference the DigiCert root certificate that Azure uses. Between February 15 and October 15, 2023, Azure IoT Hub is migrating its TLS root certificate from the DigiCert Baltimore Root Certificate to the DigiCert Global Root G2. During the migration period, you should have both certificates on your devices to ensure connectivity. For more information about the migration, see [Migrate IoT resources to a new TLS certificate root](../iot-hub/migrate-tls-certificate.md). For more information about these certificates, see [DigiCert's website](https://www.digicert.com/digicert-root-certificates.htm).
The following example demonstrates how to implement this configuration, by using the Python version of the [Paho MQTT library](https://pypi.python.org/pypi/paho-mqtt) by the Eclipse Foundation.
pip install paho-mqtt
Then, implement the client in a Python script. Replace these placeholders in the following code snippet:
-* `<local path to digicert.cer>` is the path to a local file that contains the DigiCert Baltimore Root certificate. You can create this file by copying the certificate information from [certs.c](https://github.com/Azure/azure-iot-sdk-c/blob/master/certs/certs.c) in the Azure IoT SDK for C. Include the lines `--BEGIN CERTIFICATE--` and `--END CERTIFICATE--`, remove the `"` marks at the beginning and end of every line, and remove the `\r\n` characters at the end of every line.
+* `<local path to digicert.cer>` is the path to a local file that contains the DigiCert root certificate. You can create this file by copying the certificate information from [certs.c](https://github.com/Azure/azure-iot-sdk-c/blob/master/certs/certs.c) in the Azure IoT SDK for C. Include the lines `--BEGIN CERTIFICATE--` and `--END CERTIFICATE--`, remove the `"` marks at the beginning and end of every line, and remove the `\r\n` characters at the end of every line.
* `<device id from device registry>` is the ID of a device you added to your IoT hub.
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
Add your existing load balancer deployments to a cross-region load balancer for
This region doesn't affect how the traffic is routed. If a home region goes down, traffic flow is unaffected. ### Home regions
-* East US 2
-* West US
-* Southeast Asia
* Central US
-* North Europe
* East Asia
-* US Gov Virginia
+* East US 2
+* North Europe
+* Southeast Asia
* UK South
+* US Gov Virginia
* West Europe
+* West US
> [!NOTE] > You can only deploy your cross-region load balancer or global-tier public IP in one of the listed home regions.
Cross-region load balancer routes the traffic to the appropriate regional load b
:::image type="content" source="./media/cross-region-overview/multiple-region-global-traffic.png" alt-text="Diagram of multiple region global traffic."::: ### Participating regions
-* East US
-* West Europe
+* Australia East
+* Australia Southeast
+* Central India
* Central US
+* East Asia
+* East US
* East US 2
-* West US
+* Japan East
+* North Central US
* North Europe * South Central US
-* West US 2
-* UK South
* Southeast Asia
-* North Central US
-* Japan East
-* East Asia
-* West Central US
-* Australia Southeast
-* Australia East
-* Central India
+* UK South
* US DoD Central * US DoD East * US Gov Arizona * US Gov Texas * US Gov Virginia
+* West Central US
+* West Europe
+* West US
+* West US 2
> [!NOTE] > The backend regional load balancers can be deployed in any publicly available Azure region and aren't limited to just the participating regions.
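As a sketch of how the home-region constraint surfaces in practice, the following creates a cross-region load balancer in one of the listed home regions. This assumes the `az network cross-region-lb` command group; the names are placeholders and parameter sets may vary by CLI version:

```powershell
# Create a cross-region load balancer; the location must be a listed home region.
az network cross-region-lb create --resource-group myResourceGroup --name myCrossRegionLB --location eastus2
```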
load-balancer Gateway Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-partners.md
Azure has a growing ecosystem of partners offering their network appliances for
**Tobias Kunze - Co-founder & CEO**
-[Learn more](https://glasnostic.com/blog/announcing-glasnostic-for-azure-gwlb)
- ### Palo Alto Networks :::image type="content" source="./media/gateway-partners/paloalto.png" alt-text="Screenshot of Palo Alto Networks logo.":::
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
ms.suite: integration Previously updated : 02/07/2023 Last updated : 07/14/2023 # Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps
-Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. To create a logic app, you use either the **Logic App (Consumption)** resource type or the **Logic App (Standard)** resource type. The Consumption resource type runs in the *multi-tenant* Azure Logic Apps or *integration service environment*, while the Standard resource type runs in *single-tenant* Azure Logic Apps environment.
+Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. When you create a logic app resource, you select either the **Consumption** workflow type or **Standard** workflow type. A Consumption logic app can have only one workflow that runs in *multi-tenant* Azure Logic Apps or an *integration service environment*. A Standard logic app can have one or multiple workflows that run in *single-tenant* Azure Logic Apps or an App Service Environment (ASE).
-Before you choose which resource type to use, review this article to learn how the resources types and service environments compare to each other. You can then decide the type that's best for your scenario's needs, solution requirements, and the environment where you want to deploy and run your workflows.
+Before you choose which logic app resource to create, review the following guide to learn how the logic app workflow types and service environments compare with each other. You can then make a better choice about which logic app workflow and environment best suits your scenario, solution requirements, and the destination where you want to deploy and run your workflows.
-If you're new to Azure Logic Apps, review the following documentation:
-
-* [What is Azure Logic Apps?](logic-apps-overview.md)
-* [What is a *logic app workflow*?](logic-apps-overview.md#logic-app-concepts)
+If you're new to Azure Logic Apps, review [What is Azure Logic Apps?](logic-apps-overview.md) and [What is a *logic app workflow*?](logic-apps-overview.md#logic-app-concepts).
<a name="resource-environment-differences"></a>
-## Resource types and environments
-
-To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
+## Logic app workflow types and environments
-The following table briefly summarizes differences between the **Logic App (Standard)** resource type and the **Logic App (Consumption)** resource type. You also learn how the *single-tenant* environment differs from the *multi-tenant* environment and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
+The following table summarizes the differences between a Consumption logic app workflow and Standard logic app workflow. You also learn how the *single-tenant* environment differs from the *multi-tenant* environment and *integration service environment (ISE)* for deploying, hosting, and running your workflows.
<a name="resource-type-introduction"></a>
-## Logic App (Standard) resource
+## Standard logic app and workflow
-The **Logic App (Standard)** resource type is powered by the redesigned single-tenant Azure Logic Apps runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This design provides portability, flexibility, and more performance for your logic app workflows plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem. For example, you can create, deploy, and run single-tenant based logic apps and their workflows in [Azure App Service Environment v3 (Windows plans only)](../app-service/environment/overview.md).
+The **Standard** logic app and its workflows are powered by the redesigned single-tenant Azure Logic Apps runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This design provides portability, flexibility, and more performance for your logic app workflows plus other capabilities and benefits inherited from the Azure Functions platform and the Azure App Service ecosystem. For example, you can create, deploy, and run single-tenant based logic apps and their workflows in [Azure App Service Environment v3 (Windows plans only)](../app-service/environment/overview.md).
-The Standard resource type introduces a resource structure that can host multiple workflows, similar to how an Azure function app can host multiple functions. With a 1-to-many mapping, workflows in the same logic app and tenant share compute and processing resources, providing better performance due to their proximity. This structure differs from the **Logic App (Consumption)** resource where you have a 1-to-1 mapping between a logic app resource and a workflow.
+The Standard logic app introduces a resource structure that can host multiple workflows, similar to how an Azure function app can host multiple functions. With a 1-to-many mapping, workflows in the same logic app and tenant share compute and processing resources, providing better performance due to their proximity. This structure differs from the **Consumption** logic app resource where you have a 1-to-1 mapping between the logic app resource and a workflow.
-To learn more about portability, flexibility, and performance improvements, continue with the following sections. Or, for more information about the single-tenant Azure Logic Apps runtime and Azure Functions extensibility, review the following documentation:
+To learn more about portability, flexibility, and performance improvements, continue reviewing the following sections. For more information about the single-tenant Azure Logic Apps runtime and Azure Functions extensibility, review the following documentation:
* [Azure Logic Apps Running Anywhere - Runtime Deep Dive](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564) * [Introduction to Azure Functions](../azure-functions/functions-overview.md)
To learn more about portability, flexibility, and performance improvements, cont
### Portability and flexibility
-When you create logic apps using the **Logic App (Standard)** resource type, you can deploy and run your workflows in other environments, such as [Azure App Service Environment v3 (Windows plans only)](../app-service/environment/overview.md). If you use Visual Studio Code with the **Azure Logic Apps (Standard)** extension, you can *locally* develop, build, and run your workflows in your development environment without having to deploy to Azure. If your scenario requires containers, [create single-tenant based logic apps using Azure Arc-enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, review [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md)
+When you create a **Standard** logic app and workflow, you can deploy and run your workflow in other environments, such as [Azure App Service Environment v3 (Windows plans only)](../app-service/environment/overview.md). If you use Visual Studio Code with the **Azure Logic Apps (Standard)** extension, you can *locally* develop, build, and run your workflow in your development environment without having to deploy to Azure. If your scenario requires containers, you can [create single tenant logic apps using Azure Arc-enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, see [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md)
-These capabilities provide major improvements and substantial benefits compared to the multi-tenant model, which requires you to develop against an existing running resource in Azure. Also, the multi-tenant model for automating **Logic App (Consumption)** resource deployment is based on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both apps and infrastructure.
+These capabilities provide major improvements and substantial benefits compared to the multi-tenant model, which requires you to develop against an existing running resource in Azure. The multi-tenant model for automating **Consumption** logic app resource deployment is based on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both apps and infrastructure.
-With the **Logic App (Standard)** resource type, deployment becomes easier because you can separate app deployment from infrastructure deployment. You can package the single-tenant Azure Logic Apps runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your infrastructure, you can still use ARM templates to separately provision those resources along with other processes and pipelines that you use for those purposes.
+With the **Standard** logic app resource, deployment becomes easier because you can separate app deployment from infrastructure deployment. You can package the single-tenant Azure Logic Apps runtime and your workflows together as part of your logic app resource or project. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your infrastructure, you can still use ARM templates to separately provision those resources along with other processes and pipelines that you use for those purposes.
To deploy your app, copy the artifacts to the host environment and then start your apps to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. That way, you can deploy using your own chosen tools, no matter the technology stack that you use for development.
By using standard build and deploy options, you can focus on app development sep
### Performance
-Using the **Logic App (Standard)** resource type, you can create and run multiple workflows in the same single logic app resource and tenant. With this 1-to-many mapping, these workflows share resources, such as compute, processing, storage, and network, providing better performance due to their proximity.
+With a **Standard** logic app, you can create and run multiple workflows in the same single logic app resource and tenant. With this 1-to-many mapping, these workflows share resources, such as compute, processing, storage, and network, providing better performance due to their proximity.
-The **Logic App (Standard)** resource type and single-tenant Azure Logic Apps runtime provide another significant improvement by making the more popular managed connectors available as built-in operations. For example, you can use built-in operations for Azure Service Bus, Azure Event Hubs, SQL, and others. Meanwhile, the managed connector versions are still available and continue to work.
+The **Standard** logic app resource and single-tenant Azure Logic Apps runtime provide another significant improvement by making the more popular managed connectors available as built-in connector operations. For example, you can use built-in connector operations for Azure Service Bus, Azure Event Hubs, SQL Server, and others. Meanwhile, the managed connector versions are still available and continue to work.
-When you use the new built-in operations, you create connections called *built-in connections* or *service provider connections*. Their managed connection counterparts are called *API connections*, which are created and run separately as Azure resources that you also have to then deploy by using ARM templates. Built-in operations and their connections run locally in the same process that runs your workflows. Both are hosted on the single-tenant Azure Logic Apps runtime. As a result, built-in operations and their connections provide better performance due to proximity with your workflows. This design also works well with deployment pipelines because the service provider connections are packaged into the same build artifact.
+When you use the new built-in connector operations, you create connections called *built-in connections* or *service provider connections*. Their managed connection counterparts are called *API connections*, which are created and run separately as Azure resources that you also have to then deploy by using ARM templates. Built-in operations and their connections run locally in the same process that runs your workflows. Both are hosted on the single-tenant Azure Logic Apps runtime. As a result, built-in operations and their connections provide better performance due to proximity with your workflows. This design also works well with deployment pipelines because the service provider connections are packaged into the same build artifact.
<a name="data-residency"></a> ### Data residency
-Logic app resources created with the **Logic App (Standard)** resource type are hosted in single-tenant Azure Logic Apps, which [doesn't store, process, or replicate data outside the region where you deploy these logic app resources](https://azure.microsoft.com/global-infrastructure/data-residency), meaning data in your logic app workflows stay in the same region where you create and deploy their parent resources.
+**Standard** logic app resources are hosted in single-tenant Azure Logic Apps, which [doesn't store, process, or replicate data outside the region where you deploy these logic app resources](https://azure.microsoft.com/global-infrastructure/data-residency), meaning that data in your workflows stays in the same region where you create and deploy the parent resources.
### Direct access to resources in Azure virtual networks
-Workflows running in either [Azure Logic Apps (Standard)](single-tenant-overview-compare.md) or an [*integration service environment* (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) can access secured resources such as virtual machines (VMs), other services, and systems that exist in an [Azure virtual network](../virtual-network/virtual-networks-overview.md). Both Azure Logic Apps (Standard) and an ISE are dedicated instances of the Azure Logic Apps service that use dedicated resources and run separately from the global multi-tenant Azure Logic Apps service.
+Workflows that run in either single-tenant Azure Logic Apps or in an *integration service environment* (ISE) can directly access secured resources such as virtual machines (VMs), other services, and systems that exist in an [Azure virtual network](../virtual-network/virtual-networks-overview.md).
-Running logic apps in your own dedicated instance helps reduce the impact that other Azure tenants might have on app performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors).
+Both single-tenant Azure Logic Apps and an ISE are dedicated instances of the Azure Logic Apps service, use dedicated resources, and run separately from multi-tenant Azure Logic Apps. Running workflows in a dedicated instance helps reduce the impact that other Azure tenants might have on app performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors).
-Azure Logic Apps (Standard) and an ISE also provide the following benefits:
+Single-tenant Azure Logic Apps and an ISE also provide the following benefits:
-* Your own static IP addresses, which are separate from the static IP addresses that are shared by the logic apps in the multi-tenant service. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE.
+* Your own static IP addresses, which are separate from the static IP addresses that are shared by the logic apps in the multi-tenant Azure Logic Apps. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE.
* Increased limits on run duration, storage retention, throughput, HTTP request and response timeouts, message sizes, and custom connector requests. For more information, review [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md). ## Create, build, and deploy options
-To create a logic app based on the environment that you want, you have multiple options, for example:
+To create a logic app resource based on the environment that you want, you have multiple options, for example:
**Single-tenant environment** | Option | Resources and tools | More information | |--|||
-| Azure portal | **Logic App (Standard)** resource type | [Create an example Standard logic app workflow in single-tenant Azure Logic Apps - Azure portal](create-single-tenant-workflows-azure-portal.md) |
+| Azure portal | **Standard** logic app | [Create an example Standard logic app workflow in single-tenant Azure Logic Apps - Azure portal](create-single-tenant-workflows-azure-portal.md) |
| Visual Studio Code | [**Azure Logic Apps (Standard)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurelogicapps) | [Create an example Standard logic app workflow in single-tenant Azure Logic Apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md) |
| Azure CLI | Logic Apps Azure CLI extension | [az logicapp](/cli/azure/logicapp) |
| Azure Resource Manager | - [Local](https://github.com/Azure/logicapps/tree/master/azure-devops-sample#local) <br>- [DevOps](https://github.com/Azure/logicapps/tree/master/azure-devops-sample#devops) | [Single-tenant Azure Logic Apps](https://github.com/Azure/logicapps/tree/master/azure-devops-sample) |
| Azure Arc-enabled Logic Apps | [Azure Arc-enabled Logic Apps sample](https://github.com/Azure/logicapps/tree/master/arc-enabled-logic-app-sample) | - [What is Azure Arc-enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md) <br><br>- [Create and deploy single-tenant based logic app workflows with Azure Arc-enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md) |
-||||
+| Azure REST API | [Azure App Service REST API](/rest/api/appservice/workflows) <br><br>**Note**: The Standard logic app REST API is included with the Azure App Service REST API. | [Get started with Azure REST API reference](/rest/api/azure) |
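For orientation, the following hedged sketch shows one way to create a **Standard** logic app resource with the Azure CLI extension listed in this table. All resource names are placeholders, and the parameters that your environment requires might differ.

```azurecli
# A minimal sketch (all names are placeholders). A Standard logic app
# needs a storage account for its runtime state and artifacts.
az group create --name example-rg --location eastus

az storage account create \
    --name examplelogicstorage \
    --resource-group example-rg \
    --location eastus \
    --sku Standard_LRS

# Create the Standard logic app resource that hosts your workflows.
az logicapp create \
    --name example-standard-app \
    --resource-group example-rg \
    --storage-account examplelogicstorage
```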
**Multi-tenant environment**

| Option | Resources and tools | More information |
|---|---|---|
-| Azure portal | **Logic App (Consumption)** resource type | [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-example-consumption-workflow.md) |
+| Azure portal | **Consumption** logic app | [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-example-consumption-workflow.md) |
| Visual Studio Code | [**Azure Logic Apps (Consumption)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-logicapps) | [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps - Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md)
-| Azure CLI | [**Logic Apps Azure CLI** extension](https://github.com/Azure/azure-cli-extensions/tree/master/src/logic) | - [Quickstart: Create and manage Consumption logic app workflows in multi-tenant Azure Logic Apps - Azure CLI](quickstart-logic-apps-azure-cli.md) <p><p>- [az logic](/cli/azure/logic) |
+| Azure CLI | [**Logic Apps Azure CLI** extension](https://github.com/Azure/azure-cli-extensions/tree/master/src/logic) | - [Quickstart: Create and manage Consumption logic app workflows in multi-tenant Azure Logic Apps - Azure CLI](quickstart-logic-apps-azure-cli.md) <br><br>- [az logic](/cli/azure/logic) |
| Azure Resource Manager | [**Create a logic app** ARM template](https://azure.microsoft.com/resources/templates/logic-app-create/) | [Quickstart: Create and deploy Consumption logic app workflows in multi-tenant Azure Logic Apps - ARM template](quickstart-create-deploy-azure-resource-manager-template.md) |
| Azure PowerShell | [Az.LogicApp module](/powershell/module/az.logicapp) | [Get started with Azure PowerShell](/powershell/azure/get-started-azureps) |
| Azure REST API | [Azure Logic Apps REST API](/rest/api/logic) | [Get started with Azure REST API reference](/rest/api/azure) |
-||||
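As a minimal sketch of the Azure CLI option in the preceding table, the following commands create a Consumption logic app workflow from a JSON definition file. The resource group, workflow name, and definition file are placeholders.

```azurecli
# A sketch only: workflow.json is assumed to contain a valid workflow
# definition, for example one exported from the designer's code view.
az logic workflow create \
    --resource-group example-rg \
    --location eastus \
    --name example-consumption-workflow \
    --definition workflow.json
```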
**Integration service environment**

| Option | Resources and tools | More information |
|---|---|---|
-| Azure portal | **Logic App (Consumption)** resource type with an existing ISE resource | Same as [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-example-consumption-workflow.md), but select an ISE, not a multi-tenant region. |
-||||
+| Azure portal | **Consumption** logic app deployed to an existing ISE resource | Same as [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-example-consumption-workflow.md), but select an ISE, not a multi-tenant region. |
Although your development experiences differ based on whether you create **Consumption** or **Standard** logic app resources, you can find and access all your deployed logic apps under your Azure subscription.
-For example, in the Azure portal, the **Logic apps** page shows both **Consumption** and **Standard** logic app resource types. In Visual Studio Code, deployed logic apps appear under your Azure subscription, but they're grouped by the extension that you used, namely **Azure: Logic Apps (Consumption)** and **Azure: Logic Apps (Standard)**.
+For example, in the Azure portal, the **Logic apps** page shows both **Consumption** and **Standard** logic app resources. In Visual Studio Code, deployed logic apps appear under your Azure subscription, but **Consumption** logic apps appear in the **Azure** window under the **Azure Logic Apps (Consumption)** extension, while **Standard** logic apps appear under the **Resources** section.
<a name="stateful-stateless"></a> ## Stateful and stateless workflows
-With the **Logic App (Standard)** resource type, you can create these workflow types within the same logic app:
+Within a **Standard** logic app, you can create the following workflow types:
* *Stateful*
With the **Logic App (Standard)** resource type, you can create these workflow t
| Supports chunking | No support for chunking |
| Supports asynchronous operations | No support for asynchronous operations |
| Edit default max run duration in host configuration | Best for workflows with max duration under 5 minutes |
-| Handles large messages | Best for handling small message sizes (under 64 KB) |
-|||
+| Handles large messages | Best for handling small message sizes (under 64 KB) |
</center>
With the **Logic App (Standard)** resource type, you can create these workflow t
### Nested behavior differences between stateful and stateless workflows
-You can [make a workflow callable](logic-apps-http-endpoint.md) from other workflows that exist in the same **Logic App (Standard)** resource by using the [Request trigger](../connectors/connectors-native-reqres.md), [HTTP Webhook trigger](../connectors/connectors-native-webhook.md), or managed connector triggers that have the [ApiConnectionWebhook type](logic-apps-workflow-actions-triggers.md#apiconnectionwebhook-trigger) and can receive HTTPS requests.
+You can [make a workflow callable](logic-apps-http-endpoint.md) from other workflows that exist in the same **Standard** logic app by using the [Request trigger](../connectors/connectors-native-reqres.md), [HTTP Webhook trigger](../connectors/connectors-native-webhook.md), or managed connector triggers that have the [ApiConnectionWebhook type](logic-apps-workflow-actions-triggers.md#apiconnectionwebhook-trigger) and can receive HTTPS requests.
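For illustration only, the following sketch calls such a workflow by posting JSON to the callback URL that a Request trigger generates. The URL shape and payload here are placeholders, not values from your deployment.

```bash
# A sketch only: replace the URL with the callback URL that Azure Logic Apps
# generates for your workflow's Request trigger, including its query string
# with the SAS signature parameters.
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"orderId": 123, "status": "received"}' \
  "https://<your-logic-app>.azurewebsites.net/api/<workflow-name>/triggers/manual/invoke?<query-parameters>"
```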
The following list describes the behavior patterns that nested workflows can follow after a parent workflow calls a child workflow:
The following table identifies the child workflow's behavior based on whether th
| Stateful | Stateless | Trigger and wait |
| Stateless | Stateful | Synchronous |
| Stateless | Stateless | Trigger and wait |
-||||
<a name="other-capabilities"></a> ## Other single-tenant model capabilities
-The single-tenant model and **Logic App (Standard)** resource type include many current and new capabilities, for example:
+The single-tenant model and **Standard** logic app include many current and new capabilities, for example:
* Create logic apps and their workflows from [hundreds of managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
- * More managed connectors are now available as built-in connectors in Standard logic app workflows. The built-in versions run natively on the single-tenant Azure Logic Apps runtime. Some built-in connectors are also informally known as [*service provider* connectors](../connectors/built-in.md#service-provider-interface-implementation). For a list, review [Built-in connectors in Consumption and Standard](../connectors/built-in.md#built-in-connectors).
+ * More managed connectors are now available as built-in connectors in Standard workflows. The built-in versions run natively on the single-tenant Azure Logic Apps runtime. Some built-in connectors are also informally known as [*service provider* connectors](../connectors/built-in.md#service-provider-interface-implementation). For a list, review [Built-in connectors in Consumption and Standard](../connectors/built-in.md#built-in-connectors).
* You can create your own custom built-in connectors for any service that you need by using the single-tenant Azure Logic Apps extensibility framework. Similar to built-in connectors such as Azure Service Bus and SQL Server, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime. However, custom built-in connectors aren't similar to [custom managed connectors](../connectors/introduction.md#custom-connectors-and-apis), which aren't currently supported. For more information, review [Custom connector overview](custom-connector-overview.md#custom-connector-standard) and [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md).
The single-tenant model and **Logic App (Standard)** resource type include many
* Liquid: **Transform JSON To JSON**, **Transform JSON To TEXT**, **Transform XML To JSON**, and **Transform XML To Text**

> [!NOTE]
- > To use these actions in single-tenant Azure Logic Apps (Standard), you need to have Liquid maps, XML maps, or XML schemas.
- > You can upload these artifacts in the Azure portal from your logic app's resource menu, under **Artifacts**, which includes
- > the **Schemas** and **Maps** sections. Or, you can add these artifacts to your Visual Studio Code project's **Artifacts**
- > folder using the respective **Maps** and **Schemas** folders. You can then use these artifacts across multiple workflows
- > within the *same logic app resource*.
+ > To use these actions in Standard workflows, you need to have Liquid maps, XML maps, or XML schemas.
+ > You can upload these artifacts in the Azure portal from your logic app's resource menu, under **Artifacts**,
+ > which includes the **Schemas** and **Maps** sections. Or, you can add these artifacts to your Visual Studio Code
+ > project's **Artifacts** folder using the respective **Maps** and **Schemas** folders. You can then use these
+ > artifacts across multiple workflows within the *same* logic app.
- * **Logic App (Standard)** resources can run anywhere because Azure Logic Apps generates Shared Access Signature (SAS) connection strings that these logic apps can use for sending requests to the cloud connection runtime endpoint. Azure Logic Apps service saves these connection strings with other application settings so that you can easily store these values in Azure Key Vault when you deploy in Azure.
+ * **Standard** logic app workflows can run anywhere because Azure Logic Apps generates Shared Access Signature (SAS) connection strings that these logic apps can use for sending requests to the cloud connection runtime endpoint. Azure Logic Apps saves these connection strings with other application settings so that you can easily store these values in Azure Key Vault when you deploy in Azure.
- * The **Logic App (Standard)** resource type supports having the [system-assigned managed identity *and* multiple user-assigned managed identities](create-managed-service-identity.md) enabled at the same time, though you still can only select one identity to use at any time. However, most [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/) currently don't support selecting user-assigned managed identities for authentication.
+ * **Standard** logic app workflows support enabling both the [system-assigned managed identity *and* multiple user-assigned managed identities](create-managed-service-identity.md) at the same time, although you can select only one identity to use at a time. While [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/) support using the system-assigned identity, most currently don't support selecting user-assigned managed identities for authentication, except for SQL Server and the HTTP connectors.
> [!NOTE]
> By default, the system-assigned identity is already enabled to authenticate connections at run time.
The single-tenant model and **Logic App (Standard)** resource type include many
* [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
-* Regenerate access keys for managed connections used by individual workflows in a **Logic App (Standard)** resource. For this task, [follow the same steps for the **Logic Apps (Consumption)** resource but at the individual workflow level](logic-apps-securing-a-logic-app.md#regenerate-access-keys), not the logic app resource level.
+* Regenerate access keys for managed connections used by individual workflows in a **Standard** logic app. For this task, [follow the same steps for a **Consumption** logic app but at the workflow level](logic-apps-securing-a-logic-app.md#regenerate-access-keys), not the logic app resource level.
<a name="built-connectors-standard"></a> ## Built-in connectors for Standard
-A Standard logic app workflow has many of the same built-in connectors as a Consumption logic app workflow, but not all. Vice versa, a Standard logic app workflow has many built-in connectors that aren't available in a Consumption logic app workflow.
+A **Standard** workflow can use many of the same built-in connectors as a Consumption workflow, but not all. Conversely, a Standard workflow has many built-in connectors that aren't available in a Consumption workflow.
-For example, a Standard logic app workflow has both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, SQL Server, and others. Although a Consumption logic app workflow doesn't have these same built-in connector versions, other built-in connectors such as Azure API Management, Azure App Services, and Batch, are available.
+For example, a Standard workflow has both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, SQL Server, and others. Although a Consumption workflow doesn't have these same built-in connector versions, other built-in connectors such as Azure API Management, Azure App Services, and Batch are available.
In single-tenant Azure Logic Apps, [built-in connectors with specific attributes are informally known as *service providers*](../connectors/built-in.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity. All built-in connectors run in the same process as the redesigned Azure Logic Apps runtime. For more information, review the [built-in connector list for Standard logic app workflows](../connectors/built-in.md#built-in-connectors).
In single-tenant Azure Logic Apps, [built-in connectors with specific attributes
## Changed, limited, unavailable, or unsupported capabilities
-For the **Logic App (Standard)** resource, these capabilities have changed, or they're currently limited, unavailable, or unsupported:
+For the **Standard** logic app workflow, these capabilities have changed, or they're currently limited, unavailable, or unsupported:
-* **Triggers and actions**: [Built-in triggers and actions](../connectors/built-in.md) run natively in Azure Logic Apps, while managed connectors are hosted and run in Azure. For Standard workflows, some built-in triggers and actions are currently unavailable, such as Sliding Window, Batch, Azure App Service, and Azure API Management. To start a stateful or stateless workflow, use a built-in trigger such as the Request, Event Hubs, or Service Bus trigger. The Recurrence trigger is available for stateful workflows, but not stateless workflows. In the designer, built-in triggers and actions appear on the **Built-in** tab, while [managed connector triggers and actions](../connectors/managed.md) appear on the **Azure** tab.
+* **Triggers and actions**: [Built-in triggers and actions](../connectors/built-in.md) run natively in Azure Logic Apps, while managed connectors are hosted and run using shared resources in Azure. For Standard workflows, some built-in triggers and actions are currently unavailable, such as Sliding Window, Batch, Azure App Service, and Azure API Management. To start a stateful or stateless workflow, use a built-in trigger such as the Request, Event Hubs, or Service Bus trigger. The Recurrence trigger is available for stateful workflows, but not stateless workflows. In the designer, built-in triggers and actions appear with the **In-App** label, while [managed connector triggers and actions](../connectors/managed.md) appear with the **Shared** label.
- For *stateless* workflows, *managed connector actions* are available, but *managed connector triggers* are unavailable. So the **Azure** tab appears only when you can select managed connector actions. Although you can enable managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.
+ For *stateless* workflows, *managed connector actions* are available, but *managed connector triggers* are unavailable. Although you can enable managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.
> [!NOTE]
> To run locally in Visual Studio Code, webhook-based triggers and actions require additional setup. For more information, see
> [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#webhook-setup).
- * These triggers and actions have either changed or are currently limited, unsupported, or unavailable:
+ * The following triggers and actions have either changed or are currently limited, unsupported, or unavailable:
* The built-in action, [Azure Functions - Choose an Azure function](logic-apps-azure-functions.md) is now **Azure Function Operations - Call an Azure function**. This action currently works only for functions that are created from the **HTTP Trigger** template.
For the **Logic App (Standard)** resource, these capabilities have changed, or t
* The built-in action, [Azure Logic Apps - Choose a Logic App workflow](logic-apps-http-endpoint.md) is now **Workflow Operations - Invoke a workflow in this workflow app**.
- * Some [triggers and actions for integration accounts](../connectors/managed.md#integration-account-connectors) are unavailable, for example, the AS2 (V2) actions and RosettaNet actions.
- * The Gmail connector currently isn't supported.

* [Custom managed connectors](../connectors/introduction.md#custom-connectors-and-apis) currently aren't supported. However, you can create *custom built-in operations* when you use Visual Studio Code. For more information, review [Create single-tenant based workflows using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring).
-* **Authentication**: The following authentication types are currently unavailable for the **Logic App (Standard)** resource type:
+* **Authentication**: The following authentication types are currently unavailable for **Standard** workflows:
* Azure Active Directory Open Authentication (Azure AD OAuth) for inbound calls to request-based triggers, such as the Request trigger and HTTP Webhook trigger.
For the **Logic App (Standard)** resource, these capabilities have changed, or t
* **Breakpoint debugging in Visual Studio Code**: Although you can add and use breakpoints inside the **workflow.json** file for a workflow, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#manage-breakpoints).
-* **Trigger history and run history**: For the **Logic App (Standard)** resource type, trigger history and run history in the Azure portal appears at the workflow level, not the logic app level. For more information, review [Create single-tenant based workflows using the Azure portal](create-single-tenant-workflows-azure-portal.md).
+* **Trigger history and run history**: For a **Standard** logic app, trigger history and run history in the Azure portal appear at the workflow level, not the logic app resource level. For more information, review [Create single-tenant based workflows using the Azure portal](create-single-tenant-workflows-azure-portal.md).
* **Zoom control**: The zoom control is currently unavailable on the designer.
-* **Deployment targets**: You can't deploy the **Logic App (Standard)** resource type to an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) nor to Azure deployment slots.
+* **Deployment targets**: You can't deploy a **Standard** logic app resource to an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) or to Azure deployment slots.
-* **Azure API Management**: You currently can't import the **Logic App (Standard)** resource type into Azure API Management. However, you can import the **Logic App (Consumption)** resource type.
+* **Azure API Management**: You currently can't import a **Standard** logic app resource into Azure API Management. However, you can import a **Consumption** logic app resource.
<a name="firewall-permissions"></a>
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Previously updated : 01/23/2023 Last updated : 07/13/2023 #Customer intent: As an experienced Python developer, I need secure access to my data in my Azure storage solutions, and I need to use that data to accomplish my machine learning tasks. # Data concepts in Azure Machine Learning
+With Azure Machine Learning, you can import data from a local machine or an existing cloud-based storage resource. This article describes key Azure Machine Learning data concepts.
-With Azure Machine Learning, you can bring data from a local machine or an existing cloud-based storage. In this article, you'll learn the main Azure Machine Learning data concepts.
+## Datastore
+
+An Azure Machine Learning datastore serves as a *reference* to an *existing* Azure storage account. An Azure Machine Learning datastore offers these benefits:
+
+- A common, easy-to-use API that interacts with different storage types (Blob/Files/ADLS).
+- Easier discovery of useful datastores in team operations.
+- For credential-based access (service principal/SAS/key), Azure Machine Learning datastore secures connection information. This way, you won't need to place that information in your scripts.
+
+When you create a datastore with an existing Azure storage account, you can choose between two different authentication methods:
+
+- **Credential-based** - authenticate data access with a service principal, shared access signature (SAS) token, or account key. Users with *Reader* workspace access can access the credentials.
+- **Identity-based** - use your Azure Active Directory identity or managed identity to authenticate data access.
+
+The following table summarizes the Azure cloud-based storage services that an Azure Machine Learning datastore can reference, and the authentication types that can access those services:
+
+| Supported storage service | Credential-based authentication | Identity-based authentication |
+|---|:-:|:-:|
+| Azure Blob Container | ✓ | ✓ |
+| Azure File Share | ✓ | |
+| Azure Data Lake Gen1 | ✓ | ✓ |
+| Azure Data Lake Gen2 | ✓ | ✓ |
+
+See [Create datastores](how-to-datastore.md) for more information about datastores.
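As a hedged example of the credential-based option, the following sketch registers an Azure Blob datastore by using the Azure Machine Learning CLI (v2). The YAML field values, resource names, and account key are placeholders.

```azurecli
# A minimal sketch (all values are placeholders). First author a YAML spec,
# for example blob-datastore.yml:
#
#   $schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
#   name: example_blob_datastore
#   type: azure_blob
#   description: Credential-based reference to an existing container.
#   account_name: examplestorageaccount
#   container_name: example-container
#   credentials:
#     account_key: <storage-account-key>
#
# Then register the datastore in the workspace:
az ml datastore create \
    --file blob-datastore.yml \
    --resource-group example-rg \
    --workspace-name example-workspace
```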
+
+## Data types
+
+A URI (storage location) can reference a file, a folder, or a data table. A machine learning job input and output definition requires one of the following three data types:
+
+| Type | V2 API | V1 API | Canonical Scenarios | V2/V1 API Difference |
+|---|---|---|---|---|
+|**File**<br>Reference a single file | `uri_file` | `FileDataset` | Read/write a single file - the file can have any format. | A type new to V2 APIs. In V1 APIs, files always mapped to a folder on the compute target filesystem; this mapping required an `os.path.join`. In V2 APIs, the single file is mapped. This way, you can refer to that location in your code. |
+|**Folder**<br> Reference a single folder | `uri_folder` | `FileDataset` | You must read/write a folder of parquet/CSV files into Pandas/Spark.<br><br>Deep-learning with images, text, audio, video files located in a folder. | In V1 APIs, `FileDataset` had an associated engine that could take a file sample from a folder. In V2 APIs, a Folder is a simple mapping to the compute target filesystem. |
+|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. | In V1 APIs, the Azure Machine Learning back-end stored the data materialization blueprint. As a result, `TabularDataset` only worked if you had an Azure Machine Learning workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected from Azure Machine Learning* - for example, locally and on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. See [Working with tables in Azure Machine Learning](how-to-mltable.md) for more information. |
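To make the three types concrete, the following hedged sketch declares one job input of each type in a CLI (v2) job specification. All paths and names are placeholders.

```azurecli
# A sketch only: a job spec such as job.yml can declare inputs of each type:
#
#   inputs:
#     my_file:
#       type: uri_file
#       path: azureml://datastores/example_datastore/paths/data/titanic.csv
#     my_folder:
#       type: uri_folder
#       path: azureml://datastores/example_datastore/paths/data/
#     my_table:
#       type: mltable
#       path: azureml://datastores/example_datastore/paths/tables/example/
#
# Submit the job that consumes these inputs:
az ml job create \
    --file job.yml \
    --resource-group example-rg \
    --workspace-name example-workspace
```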
## URI

A Uniform Resource Identifier (URI) represents a storage location on your local computer, Azure storage, or a publicly available http(s) location. These examples show URIs for different storage options:

|Storage location | URI examples |
|---|---|
+|Azure Machine Learning [Datastore](#datastore) | `azureml://datastores/<data_store_name>/paths/<folder1>/<folder2>/<folder3>/<file>.parquet` |
|Local computer | `./home/username/data/my_data` |
|Public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
|Blob storage | `wasbs://<containername>@<accountname>.blob.core.windows.net/<folder>/`|
|Azure Data Lake (gen2) | `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>.csv` |
-| Azure Data Lake (gen1) | `adl://<accountname>.azuredatalakestore.net/<folder1>/<folder2>`
-|Azure Machine Learning [Datastore](#datastore) | `azureml://datastores/<data_store_name>/paths/<folder1>/<folder2>/<folder3>/<file>.parquet` |
+| Azure Data Lake (gen1) | `adl://<accountname>.azuredatalakestore.net/<folder1>/<folder2>` |
-An Azure Machine Learning job maps URIs to the compute target filesystem. This mapping means that in a command that consumes or produces a URI, that URI works like a file or a folder. A URI uses **identity-based authentication** to connect to storage services, with either your Azure Active Directory ID (default), or Managed Identity. Azure Machine Learning [Datastore](#datastore) URIs can apply either identity-based authentication, or **credential-based** (for example, Service Principal, SAS token, account key) without exposure of secrets.
+An Azure Machine Learning job maps URIs to the compute target filesystem. This mapping means that in a command that consumes or produces a URI, that URI works like a file or a folder. A URI uses **identity-based authentication** to connect to storage services, with either your Azure Active Directory ID (default), or Managed Identity. Azure Machine Learning [Datastore](#datastore) URIs can apply either identity-based authentication, or **credential-based** (for example, Service Principal, SAS token, account key), without exposure of secrets.
A URI can serve as either an *input* or an *output* to an Azure Machine Learning job, and it can map to the compute target filesystem with one of four different *mode* options:
Job<br>Input or Output | `upload` | `download` | `ro_mount` | `rw_mount` | `dire
Input | | ✓ | ✓ | | ✓ |
Output | ✓ | | | ✓ |
-Read [Access data in a job](how-to-read-write-data-v2.md) for more information.
+See [Access data in a job](how-to-read-write-data-v2.md) for more information.
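As a brief sketch of these modes in practice, a CLI (v2) job specification can set the mode explicitly on each input and output; the names and paths below are placeholders.

```azurecli
# A sketch only: mode settings inside a job spec such as job.yml:
#
#   inputs:
#     training_data:
#       type: uri_folder
#       path: azureml://datastores/example_datastore/paths/data/
#       mode: ro_mount      # read-only mount on the compute target
#   outputs:
#     model_output:
#       type: uri_folder
#       mode: rw_mount      # read-write mount for files the job produces
#
az ml job create --file job.yml --resource-group example-rg --workspace-name example-workspace
```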
-## Data types
-
-A URI (storage location) can reference a file, a folder, or a data table. A machine learning job input and output definition requires one of the following three data types:
+## Data runtime capability
+Azure Machine Learning uses its own *data runtime* for three purposes:
-|Type |V2 API |V1 API |Canonical Scenarios | V2/V1 API Difference
-||||||
-|**File**<br>Reference a single file | `uri_file` | `FileDataset` | Read/write a single file - the file can have any format. | A type new to V2 APIs. In V1 APIs, files always mapped to a folder on the compute target filesystem; this mapping required an `os.path.join`. In V2 APIs, the single file is mapped. This way, you can refer to that location in your code. |
-|**Folder**<br> Reference a single folder | `uri_folder` | `FileDataset` | You must read/write a folder of parquet/CSV files into Pandas/Spark.<br><br>Deep-learning with images, text, audio, video files located in a folder. | In V1 APIs, `FileDataset` had an associated engine that could take a file sample from a folder. In V2 APIs, a Folder is a simple mapping to the compute target filesystem. |
-|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. | In V1 APIs, the Azure Machine Learning back-end stored the data materialization blueprint. As a result, `TabularDataset` only worked if you had an Azure Machine Learning workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected to AzureML* - for example, local, on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. Read [Working with tables in Azure Machine Learning](how-to-mltable.md) for more information. |
+- for mounts/uploads/downloads
+- to map storage URIs to the compute target filesystem
+- to materialize tabular data into pandas/spark with Azure Machine Learning tables (`mltable`)
-## Data runtime capability
-Azure Machine Learning uses its own *data runtime* for mounts/uploads/downloads, to map storage URIs to the compute target filesystem, or to materialize tabular data into pandas/spark with Azure Machine Learning tables (`mltable`). The Azure Machine Learning data runtime is designed for machine learning task *high speed and high efficiency*. Its key benefits include:
+The Azure Machine Learning data runtime is designed for *high speed and high efficiency* of machine learning tasks. It offers these key benefits:
> [!div class="checklist"] > - [Rust](https://www.rust-lang.org/) language architecture. The Rust language is known for high speed and high memory efficiency.
Azure Machine Learning uses its own *data runtime* for mounts/uploads/downloads,
> - Data pre-fetches operate as a background task on the CPU(s), to enhance utilization of the GPU(s) in deep-learning operations.
> - Seamless authentication to cloud storage.
-## Datastore
-
-An Azure Machine Learning datastore serves as a *reference* to an *existing* Azure storage account. The benefits of Azure Machine Learning datastore creation and use include:
-
-1. A common, easy-to-use API that interacts with different storage types (Blob/Files/ADLS).
-1. Easier discovery of useful datastores in team operations.
-1. For credential-based access (service principal/SAS/key), Azure Machine Learning datastore secures connection information. This way, you won't need to place that information in your scripts.
-
-When you create a datastore with an existing Azure storage account, you can choose between two different authentication methods:
-- **Credential-based** - authenticate data access with a service principal, shared access signature (SAS) token, or account key. Users with *Reader* workspace access can access the credentials.
-- **Identity-based** - use your Azure Active Directory identity or managed identity to authenticate data access.
-
-The following table summarizes the Azure cloud-based storage services that an Azure Machine Learning datastore can create. Additionally, the table summarizes the authentication types that can access those
-
-Supported storage service | Credential-based authentication | Identity-based authentication
-||:-:|::|
-Azure Blob Container| Γ£ô | Γ£ô|
-Azure File Share| Γ£ô | |
-Azure Data Lake Gen1 | Γ£ô | Γ£ô|
-Azure Data Lake Gen2| Γ£ô | Γ£ô|
-
-Read [Create datastores](how-to-datastore.md) for more information about datastores.
-
## Data asset

An Azure Machine Learning data asset resembles web browser bookmarks (favorites). Instead of remembering long storage paths (URIs) that point to your most frequently used data, you can create a data asset, and then access that asset with a friendly name. Data asset creation also creates a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and you don't risk data source integrity. You can create Data assets from Azure Machine Learning datastores, Azure Storage, public URLs, or local files.
-Read [Create data assets](how-to-create-data-assets.md) for more information about data assets.
+See [Create data assets](how-to-create-data-assets.md) for more information about data assets.
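As a minimal sketch, the following CLI (v2) command registers a local file as a `uri_file` data asset under a friendly name; the asset name, version, and path are placeholders.

```azurecli
# A sketch only: register a local CSV file as a versioned data asset.
az ml data create \
    --name example-titanic-data \
    --version 1 \
    --type uri_file \
    --path ./data/titanic.csv \
    --resource-group example-rg \
    --workspace-name example-workspace
```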
## Next steps
+- [Access data in a job](how-to-read-write-data-v2.md)
- [Install and set up the CLI (v2)](how-to-configure-cli.md#install-and-set-up-the-cli-v2)
- [Create datastores](how-to-datastore.md#create-datastores)
- [Create data assets](how-to-create-data-assets.md#create-data-assets)
-- [Access data in a job](how-to-read-write-data-v2.md)
- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the manag
Use a YAML file to define the managed VNet configuration and add a private endpoint for the Azure Storage Account. Also set `spark_enabled: true`:
- > [!TIP]
- > This example is for a managed VNet configured to allow internet traffic. If you want to allow only approved outbound traffic, set `isolation_mode: allow_only_approved_outbound` instead.
+ > [!NOTE]
+ > This example is for a managed VNet configured to allow internet traffic. Currently, serverless Spark does not support `isolation_mode: allow_only_approved_outbound` to allow only approved outbound traffic.
```yml name: myworkspace
To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the manag
The following example demonstrates how to create a managed VNet for an existing Azure Machine Learning workspace named `myworkspace`. It also adds a private endpoint for the Azure Storage Account and sets `spark_enabled=true`:
- > [!TIP]
- > The following example is for a managed VNet configured to allow internet traffic. If you want to allow only approved outbound traffic, use `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND` instead.
+ > [!NOTE]
+ > The following example is for a managed VNet configured to allow internet traffic. Currently, serverless Spark does not support `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND` to allow only approved outbound traffic.
```python # Get the existing workspace
machine-learning How To Deploy For Real Time Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference.md
You can view various metrics (request numbers, request latency, network bytes, C
For more information on how to view online endpoint metrics, see [Monitor online endpoints](../how-to-monitor-online-endpoints.md#metrics).
+## Troubleshoot endpoints deployed from prompt flow
+
+### Unable to fetch deployment schema
+
+If the **Test** tab on the endpoint detail page shows **Unable to fetch deployment schema** after you deploy the endpoint, try the following two methods to mitigate the issue:
++
+- Make sure you have granted the correct permission to the endpoint identity. Learn more about [how to grant permission to the endpoint identity](#grant-permissions-to-the-endpoint).
+- This issue might occur if you ran your flow in an older runtime version and then deployed the flow, because the deployment also used that older runtime's environment. Update the runtime by following [this guidance](./how-to-create-manage-runtime.md#update-runtime-from-ui), rerun the flow in the latest runtime, and then deploy the flow again.
+
+### Access denied to list workspace secret
+
+If you encounter an error like "Access denied to list workspace secret," check whether you have granted the correct permission to the endpoint identity. Learn more about [how to grant permission to the endpoint identity](#grant-permissions-to-the-endpoint).
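As a hedged sketch of one way to grant that permission, the following command assigns a secrets-reader role to the endpoint's system-assigned identity at the workspace scope. All IDs are placeholders, and the role that your scenario requires might differ; follow the linked guidance for the authoritative steps.

```azurecli
# A sketch only (all IDs are placeholders): let the endpoint identity
# read workspace connection secrets.
az role assignment create \
    --assignee "<endpoint-identity-object-id>" \
    --role "Azure Machine Learning Workspace Connection Secrets Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>"
```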
## Clean up resources

If you aren't going to use the endpoint after completing this tutorial, you should delete the endpoint.
If you aren't going use the endpoint after completing this tutorial, you should
> The complete deletion may take approximately 20 minutes.
+
## Next Steps

- [Iterate and optimize your flow by tuning prompts using variants](how-to-tune-prompts-using-variants.md)
mariadb Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-business-continuity.md
The following table compares RTO and RPO in a **typical workload** scenario:
| **Capability** | **Basic** | **General Purpose** | **Memory optimized** |
|:--|:-:|:-:|:-:|
| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min |
-| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h |
+| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO > 24 h | RTO - Varies <br/>RPO > 24 h |
| Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min* |

\* RTO and RPO **can be much higher** in some cases, depending on various factors including latency between sites, the amount of data to be transmitted, and, importantly, the primary database write workload.
migrate Concepts Dependency Visualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-dependency-visualization.md
ms. Previously updated : 03/08/2023 Last updated : 07/14/2023
Dependency analysis identifies dependencies between discovered on-premises serve
There are two options for deploying dependency analysis.

**Option** | **Details** | **Public cloud** | **Azure Government**
+- |- | - |-
**Agentless** | Generally available for VMware VMs, Hyper-V VMs, bare-metal servers, and servers running on other public clouds like AWS, GCP etc. | Supported | Supported **Agent-based analysis** | Uses the [Service Map solution](/previous-versions/azure/azure-monitor/vm/service-map) in Azure Monitor, to enable dependency visualization and analysis.<br/><br/> You need to install agents on each on-premises server that you want to analyze. | Supported | Not supported.
The differences between agentless visualization and agent-based visualization ar
**Requirement** | **Agentless** | **Agent-based**
--- | --- | ---
-**Support** | Available for VMware VMs in general availability (GA).<br><br>Available for Hyper-V VMs and physical servers in public preview. | In general availability (GA).
+**Support** | Available for VMware VMs in general availability (GA). | In general availability (GA).
**Agent** | No agents needed on servers you want to analyze. | Agents required on each on-premises server that you want to analyze.
**Log Analytics** | Not required. | Azure Migrate uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency analysis.<br/><br/> You associate a Log Analytics workspace with a project. The workspace must reside in the East US, Southeast Asia, or West Europe regions. The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).
**Process** | Captures TCP connection data. After discovery, it gathers data at intervals of five minutes. | Service Map agents installed on a server gather data about TCP processes, and inbound/outbound connections for each process.
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
ms. Previously updated : 07/11/2023 Last updated : 07/12/2023
The table summarizes agentless migration requirements for VMware vSphere VMs.
**Boot requirements** | **Windows VMs:**<br/>OS Drive (C:\\) and System Reserved Partition (EFI System Partition for UEFI VMs) should reside on the same disk.<br/>If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks. <br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks. <br/><br/> **Linux VMs:**<br/> If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks.<br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks.
**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
**Disk size** | Up to 2-TB OS disk for gen 1 VM and gen 2 VMs; 32 TB for data disks. Changing the size of the source disk after initiating replication is supported and will not impact the ongoing replication cycle.
-**Dynamic disk** | An OS disk as a dynamic disk is not supported. Ongoing replications need to be disabled and re-enabled after converting a dynamic OS disk to basic to start replication of the disk successfully.
-**Disk limits** | Up to 60 disks per VM.
+**Dynamic disk** | - An OS disk as a dynamic disk is not supported. <br/> - If a VM with OS disk as dynamic disk is replicating, convert the disk type from dynamic to basic and allow the new cycle to complete, before triggering test migration or migration. Note that you'll need help from OS support to convert the disk type from dynamic to basic.
**Encrypted disks/volumes** | VMs with encrypted disks/volumes aren't supported for migration. **Shared disk cluster** | Not supported. **Independent disks** | Not supported.
migrate Tutorial App Containerization Aspnet App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-app-service.md
Before you start this tutorial, you should:
**Requirement** | **Details**
--- | ---
**Identify a machine on which to install the tool** | You need a Windows machine on which to install and run the Azure Migrate App Containerization tool. The Windows machine could run a server (Windows Server 2016 or later) or client (Windows 10) operating system. (The tool can run on your desktop.) <br/><br/> The Windows machine running the tool should have network connectivity to the servers or virtual machines hosting the ASP.NET applications that you'll containerize.<br/><br/> Ensure that 6 GB is available on the Windows machine running the Azure Migrate App Containerization tool. This space is for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/> <br/>If the Microsoft Web Deployment tool isn't already installed on the machine running the App Containerization tool and the application server, install it. You can [download the tool](https://aka.ms/webdeploy3.6).
-**Application servers** | Enable PowerShell remoting on the application servers: sign in to the application server and follow [these instructions to turn on PowerShell remoting](/powershell/module/microsoft.powershell.core/enable-psremoting). <br/><br/> If the application server is running Window Server 2008 R2, ensure that PowerShell 5.1 is installed on the application server. Follow the instructions [here to download and install PowerShell 5.1](/powershell/scripting/windows-powershell/wmf/setup/install-configure) on the application server. <br/><br/> If the Microsoft Web Deployment tool isn't already installed on the machine running the App Containerization tool and the application server, install it. You can [download the tool](https://aka.ms/webdeploy3.6).
-**ASP.NET application** | The tool currently supports: <br> <ul><li> ASP.NET applications that use .NET Framework 3.5 or later.<br/> <li>Application servers that run Windows Server 2008 R2 or later. (Application servers should be running PowerShell 5.1.) <br/><li> Applications that run on Internet Information Services 7.5 or later.</ul> <br/><br/> The tool currently doesn't support: <br/> <ul><li>Applications that require Windows authentication. (AKS doesn't currently support gMSA.) <br/> <li> Applications that depend on other Windows services hosted outside of Internet Information Services.
+**Application servers** | Enable PowerShell remoting on the application servers: sign in to the application server and follow [these instructions to turn on PowerShell remoting](/powershell/module/microsoft.powershell.core/enable-psremoting). <br/><br/> Ensure that PowerShell 5.1 is installed on the application server. Follow the instructions [here to download and install PowerShell 5.1](/powershell/scripting/windows-powershell/wmf/setup/install-configure) on the application server. <br/><br/> If the Microsoft Web Deployment tool isn't already installed on the machine running the App Containerization tool and the application server, install it. You can [download the tool](https://aka.ms/webdeploy3.6).
+**ASP.NET application** | The tool currently supports: <br> <ul><li> ASP.NET applications that use .NET Framework 3.5 or later.<br/> <li>Application servers that run Windows Server 2012 R2 or later. (Application servers should be running PowerShell 5.1.) <br/><li> Applications that run on Internet Information Services 7.5 or later.</ul> <br/><br/> The tool currently doesn't support: <br/> <ul><li>Applications that require Windows authentication. (AKS doesn't currently support gMSA.) <br/> <li> Applications that depend on other Windows services hosted outside of Internet Information Services.
## Prepare an Azure user account
migrate Tutorial App Containerization Aspnet Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-kubernetes.md
ms.
Previously updated : 04/24/2023 Last updated : 07/14/2023 # ASP.NET app containerization and migration to Azure Kubernetes Service
Before you begin this tutorial, you should:
**Requirement** | **Details**
--- | ---
**Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the ASP.NET applications to be containerized.<br/><br/> Ensure that 6-GB space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/> <br/>Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
-**Application servers** | Enable PowerShell remoting on the application servers: Sign in to the application server and follow [these](/powershell/module/microsoft.powershell.core/enable-psremoting) instructions to turn on PowerShell remoting. <br/><br/> If the application server is running Windows Server 2008 R2, ensure that PowerShell 5.1 is installed on the application server. Follow the instruction [here](/powershell/scripting/windows-powershell/wmf/setup/install-configure) to download and install PowerShell 5.1 on the application server. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
-**ASP.NET application** | The tool currently supports:<br/> - ASP.NET applications using Microsoft .NET framework 3.5 or later. <br/>- Application servers running Windows Server 2008 R2 or later (application servers should be running PowerShell version 5.1). <br/>- Applications running on Internet Information Services (IIS) 7.5 or later. <br/><br/> The tool currently doesn't support: <br/>- Applications requiring Windows authentication (The App Containerization tool currently doesn't support gMSA). <br/>- Applications that depend on other Windows services hosted outside IIS.
+**Application servers** | Enable PowerShell remoting on the application servers: Sign in to the application server and follow [these](/powershell/module/microsoft.powershell.core/enable-psremoting) instructions to turn on PowerShell remoting. <br/><br/> Ensure that PowerShell 5.1 is installed on the application server. Follow the instruction [here](/powershell/scripting/windows-powershell/wmf/setup/install-configure) to download and install PowerShell 5.1 on the application server. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
+**ASP.NET application** | The tool currently supports:<br/> - ASP.NET applications using Microsoft .NET framework 3.5 or later. <br/>- Application servers running Windows Server 2012 R2 or later (application servers should be running PowerShell version 5.1). <br/>- Applications running on Internet Information Services (IIS) 7.5 or later. <br/><br/> The tool currently doesn't support: <br/>- Applications requiring Windows authentication (The App Containerization tool currently doesn't support gMSA). <br/>- Applications that depend on other Windows services hosted outside IIS.
## Prepare an Azure user account
nat-gateway Tutorial Dual Stack Outbound Nat Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer.md
Create a resource group with [az group create](/cli/azure/group#az-group-create)
```azurecli-interactive
az group create \
- --name TutorialIPv6NATLB-rg \
- --location westus2
+ --name test-rg \
+ --location eastus2
```

### Create network and subnets
Use [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create) to
```azurecli-interactive
az network vnet create \
- --resource-group TutorialIPv6NATLB-rg \
- --location westus2 \
- --name myVNet \
- --address-prefixes '10.1.0.0/16'
+ --resource-group test-rg \
+ --location eastus2 \
+ --name vnet-1 \
+ --address-prefixes '10.0.0.0/16'
```

Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create) to create the IPv4 subnet for the virtual network and the Azure Bastion subnet.

```azurecli-interactive
az network vnet subnet create \
- --name myBackendSubnet \
- --resource-group TutorialIPv6NATLB-rg \
- --vnet-name myVNet \
- --address-prefixes '10.1.0.0/24'
+ --name subnet-1 \
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
+ --address-prefixes '10.0.0.0/24'
```

```azurecli-interactive
az network vnet subnet create \
--name AzureBastionSubnet \
- --resource-group TutorialIPv6NATLB-rg \
- --vnet-name myVNet \
- --address-prefixes '10.1.1.0/26'
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
+ --address-prefixes '10.0.1.0/26'
```

### Create bastion host
Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public
```azurecli-interactive
az network public-ip create \
- --resource-group TutorialIPv6NATLB-rg \
- --name myPublicIP-Bastion \
+ --resource-group test-rg \
+ --name public-ip \
--sku standard \
--zone 1 2 3
```
Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_cr
```azurecli-interactive
az network bastion create \
- --resource-group TutorialIPv6NATLB-rg \
- --name myBastion \
- --public-ip-address myPublicIP-Bastion \
- --vnet-name myVNet \
- --location westus2
+ --resource-group test-rg \
+ --name bastion \
+ --public-ip-address public-ip \
+ --vnet-name vnet-1 \
+ --location eastus2 \
+ --sku basic
```
Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public
```azurecli-interactive
az network public-ip create \
- --resource-group TutorialIPv6NATLB-rg \
- --name myPublicIP-NAT \
+ --resource-group test-rg \
+ --name public-ip-nat \
--sku standard \
--zone 1 2 3
```
Use [az network nat gateway create](/cli/azure/network/nat/gateway#az-network-na
```azurecli-interactive
az network nat gateway create \
- --resource-group TutorialIPv6NATLB-rg \
- --name myNATgateway \
- --public-ip-addresses myPublicIP-NAT \
+ --resource-group test-rg \
+ --name nat-gateway \
+ --public-ip-addresses public-ip-nat \
--idle-timeout 4
```
-Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_update) to associate the NAT gateway with **myBackendSubnet**.
+Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_update) to associate the NAT gateway with **subnet-1**.
```azurecli-interactive
az network vnet subnet update \
- --resource-group TutorialIPv6NATLB-rg \
- --vnet-name myVNet \
- --name myBackendSubnet \
- --nat-gateway myNATgateway
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
+ --name subnet-1 \
+ --nat-gateway nat-gateway
```

## Add IPv6 to virtual network
-The addition of IPv6 to the virtual network must be done after the NAT gateway is associated with **myBackendSubnet**. Use the following example to add and IPv6 address space and subnet to the virtual network you created in the previous steps.
+The addition of IPv6 to the virtual network must be done after the NAT gateway is associated with **subnet-1**. Use the following example to add an IPv6 address space and subnet to the virtual network you created in the previous steps.
# [**Portal**](#tab/dual-stack-outbound-portal)
Use [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update) to
```azurecli-interactive
az network vnet update \
- --address-prefixes 10.1.0.0/16 2404:f800:8000:122::/63 \
- --name myVNet \
- --resource-group TutorialIPv6NATLB-rg
+ --address-prefixes 10.0.0.0/16 2404:f800:8000:122::/63 \
+ --name vnet-1 \
+ --resource-group test-rg
```

Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_update) to add the IPv6 subnet to the virtual network.

```azurecli-interactive
az network vnet subnet update \
- --address-prefixes 10.1.0.0/24 2404:f800:8000:122::/64 \
- --name myBackendSubnet \
- --vnet-name myVNet \
- --resource-group TutorialIPv6NATLB-rg
+ --address-prefixes 10.0.0.0/24 2404:f800:8000:122::/64 \
+ --name subnet-1 \
+ --vnet-name vnet-1 \
+ --resource-group test-rg
```
Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to cre
```azurecli-interactive az network nsg create \
- --name myNSG \
- --resource-group TutorialIPv6NATLB-rg
+ --name nsg-1 \
+ --resource-group test-rg
``` Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to create a rule for RDP connectivity to the virtual machine. ```azurecli-interactive az network nsg rule create \
- --resource-group TutorialIPv6NATLB-rg \
- --nsg-name myNSG \
- --name myNSGRuleRDP \
+ --resource-group test-rg \
+ --nsg-name nsg-1 \
+ --name ssh-rule \
--protocol '*' \ --direction inbound \ --source-address-prefix '*' \ --source-port-range '*' \ --destination-address-prefix '*' \
- --destination-port-range 3389 \
+ --destination-port-range 22 \
--access allow \ --priority 200 ```
Use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to cre
```azurecli-interactive az network nic create \
- --name myNIC \
- --resource-group TutorialIPv6NATLB-rg \
- --vnet-name myVNet \
- --subnet myBackendSubnet \
+ --name nic-1 \
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
+ --subnet subnet-1 \
--private-ip-address-version IPv4 ```
Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az_networ
```azurecli-interactive az network nic ip-config create \
- --name ipconfig-IPv6 \
- --nic-name myNIC \
- --resource-group TutorialIPv6NATLB-rg \
- --vnet-name myVNet \
- --subnet myBackendSubnet \
+ --name ipconfig-ipv6 \
+ --nic-name nic-1 \
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
+ --subnet subnet-1 \
--private-ip-address-version IPv6 ```
Use [az vm create](/cli/azure/vm#az-vm-create) to create the virtual machine.
```azurecli-interactive az vm create \
- --name myVM \
- --resource-group TutorialIPv6NATLB-rg \
+ --resource-group test-rg \
+ --name vm-1 \
+ --image Ubuntu2204 \
--admin-username azureuser \
- --image Win2022Datacenter \
- --nics myNIC
- ```
+ --authentication-type password \
+ --nics nic-1
+```
+ ## Create public load balancer
Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public
```azurecli-interactive az network public-ip create \
- --resource-group TutorialIPv6NATLB-rg \
- --name myPublicIP-IPv6 \
+ --resource-group test-rg \
+ --name public-ip-ipv6 \
--sku standard \ --version IPv6 \ --zone 1 2 3
Use [az network lb create](/cli/azure/network/lb#az-network-lb-create) to create
```azurecli-interactive az network lb create \
- --name myLoadBalancer \
- --resource-group TutorialIPv6NATLB-rg \
- --backend-pool-name myBackendPool \
- --frontend-ip-name myFrontend-IPv6 \
- --location westus2 \
- --public-ip-address myPublicIP-IPv6 \
- --sku Standard
+ --name load-balancer \
+ --resource-group test-rg \
+ --backend-pool-name backend-pool \
+ --frontend-ip-name frontend-ipv6 \
+ --location eastus2 \
+ --public-ip-address public-ip-ipv6 \
+ --sku standard
``` Use [az network lb outbound-rule create](/cli/azure/network/lb/outbound-rule#az-network-lb-outbound-rule-create) to create the outbound rule for the backend pool of the load balancer. The outbound rule enables outbound connectivity for virtual machines in the backend pool of the load balancer. ```azurecli-interactive az network lb outbound-rule create \
- --address-pool myBackendPool \
- --frontend-ip-configs myFrontend-IPv6 \
- --lb-name myLoadBalancer \
- --name myOutBoundRule \
+ --address-pool backend-pool \
+ --frontend-ip-configs frontend-ipv6 \
+ --lb-name load-balancer \
+ --name outbound-rule \
--protocol All \
- --resource-group TutorialIPv6NATLB-rg \
+ --resource-group test-rg \
--outbound-ports 20000 \ --enable-tcp-reset true ```
Use [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config
```azurecli-interactive az network nic ip-config address-pool add \
- --address-pool myBackendPool \
- --ip-config-name ipconfig-IPv6 \
- --nic-name myNIC \
- --resource-group TutorialIPv6NATLB-rg \
- --lb-name myLoadBalancer
+ --address-pool backend-pool \
+ --ip-config-name ipconfig-ipv6 \
+ --nic-name nic-1 \
+ --resource-group test-rg \
+ --lb-name load-balancer
```
Use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-i
```azurecli-interactive az network public-ip show \
- --resource-group TutorialIPv6NATLB-rg \
- --name myPublicIP-NAT \
+ --resource-group test-rg \
+ --name public-ip-nat \
--query ipAddress \ --output tsv ``` ```output azureuser@Azure:~$ az network public-ip show \
- --resource-group TutorialIPv6NATLB-rg \
- --name myPublicIP-NAT \
+ --resource-group test-rg \
+ --name public-ip-nat \
--query ipAddress \ --output tsv 40.90.217.214
azureuser@Azure:~$ az network public-ip show \
```azurecli-interactive az network public-ip show \
- --resource-group TutorialIPv6NATLB-rg \
- --name myPublicIP-IPv6 \
+ --resource-group test-rg \
+ --name public-ip-ipv6 \
--query ipAddress \ --output tsv ``` ```output azureuser@Azure:~$ az network public-ip show \
- --resource-group TutorialIPv6NATLB-rg \
- --name myPublicIP-IPv6 \
+ --resource-group test-rg \
+ --name public-ip-ipv6 \
--query ipAddress \ --output tsv 2603:1030:c04:3::4d
Make note of both IP addresses. Use the IPs to verify the outbound connectivity
1. Select **vm-1**.
-1. In the **Overview** of **myVM**, select **Connect** then **Bastion**. Select **Use Bastion**
+1. In the **Overview** of **vm-1**, select **Connect**, then **Bastion**. Select **Use Bastion**.
1. Enter the username and password you created when you created the virtual machine.
Make note of both IP addresses. Use the IPs to verify the outbound connectivity
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-1. Select **myVM**.
+1. Select **vm-1**.
-1. In the **Overview** of **myVM**, select **Connect** then **Bastion**.
+1. In the **Overview** of **vm-1**, select **Connect**, then **Bastion**. Select **Use Bastion**.
1. Enter the username and password you created when you created the virtual machine. 1. Select **Connect**.
-1. On the desktop of **myVM**, open **Microsoft Edge**.
-
-1. To confirm the IPv4 address, enter `http://v4.testmyipv6.com` in the address bar.
+1. At the command line, enter the following command to verify the IPv4 address.
-1. You should see the IPv4 address displayed. In this example, the IP of **40.90.217.214** displayed.
+ ```bash
+ curl -4 icanhazip.com
+ ```
- :::image type="content" source="./media/tutorial-dual-stack-outbound-nat-load-balancer/cli-verify-ipv4.png" alt-text="Screenshot of outbound IPv4 public IP address from CLI steps.":::
+ ```output
+ azureuser@vm-1:~$ curl -4 icanhazip.com
+ 40.90.217.214
+ ```
-1. In the address bar, enter `http://v6.testmyipv6.com`
+1. At the command line, enter the following command to verify the IPv6 address.
-1. You should see the IPv6 address displayed. In this example, the IP of **2603:1030:c04:3::4d** is displayed.
+ ```bash
+ curl -6 icanhazip.com
+ ```
- :::image type="content" source="./media/tutorial-dual-stack-outbound-nat-load-balancer/cli-verify-ipv6.png" alt-text="Screenshot of outbound IPv6 public IP address from CLI steps.":::
+ ```output
+ azureuser@vm-1:~$ curl -6 icanhazip.com
+ 2603:1030:c04:3::4d
+ ```
-1. Close the bastion connection to **myVM**.
+1. Close the bastion connection to **vm-1**.
Use [az group delete](/cli/azure/group#az-group-delete) to delete the resource g
```azurecli-interactive az group delete \
- --name TutorialIPv6NATLB-rg
+ --name test-rg
```
operator-nexus How To Route Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/how-to-route-policy.md
IP prefixes specify only the match conditions of route policies. They don't spec
This command creates an IP prefix resource with IPv4 prefix rules: ```azurecli
-az nf ipprefix create \
+az networkfabric ipprefix create \
--resource-group "ResourceGroupName" \ --resource-name "ipprefixv4-1204-cn1" \ --location "eastus" \
Expected output:
This command creates an IP prefix resource with IPv6 prefix rules, ```azurecli
-az nf ipprefix create \
+az networkfabric ipprefix create \
--resource-group "ResourceGroupName" \ --resource-name "ipprefixv6-2701-cn1" \ --location "eastus" \
IP community resource allows operators to manipulate routes based on Community v
This command creates an IP community resource: ```azurecli
-az nf ipcommunity create \
+az networkfabric ipcommunity create \
--resource-group "ResourceGroupName" \ --resource-name "ipcommunity-2701" \ --location "eastus" \
Expected output:
This command displays an IP community resource: ```azurecli
-az nf ipcommunity show --resource-group "ResourceGroupName" --resource-name "ipcommunity-2701"
+az networkfabric ipcommunity show --resource-group "ResourceGroupName" --resource-name "ipcommunity-2701"
```
The `IPExtendedCommunity`resource allows operators to manipulate routes based o
This command creates an IP extended community resource: ```azurecli
-az nf ipextendedcommunity create \
+az networkfabric ipextendedcommunity create \
--resource-group "ResourceGroupName" \ --resource-name "ipextcommunity-2701" \ --location "eastus" \
Expected output:
This command displays an IP extended community resource: ```azurecli
-az nf ipextendedcommunity show --resource-group "ResourceGroupName" --resource-name "ipextcommunity-2701"
+az networkfabric ipextendedcommunity show --resource-group "ResourceGroupName" --resource-name "ipextcommunity-2701"
``` Expected output:
Route policy resource enables an operator to specify conditions and actions base
This command creates route policies: ```azurecli
-az nf routepolicy create \
+az networkfabric routepolicy create \
--resource-group "ResourceGroupName" \ --resource-name "rcf-Fab3-l3domain-v6-connsubnet-ext-policy" \ --location "eastus" \
Expected output:
This command displays route policies: ```Azurecli
-az nf routepolicy show --resource-group "ResourceGroupName" --resource-name "rcf-Fab3-l3domain-v6-connsubnet-ext-policy"
+az networkfabric routepolicy show --resource-group "ResourceGroupName" --resource-name "rcf-Fab3-l3domain-v6-connsubnet-ext-policy"
``` Expected output:
operator-nexus Howto Azure Operator Nexus Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-azure-operator-nexus-prerequisites.md
In subsequent deployments of Operator Nexus, you can skip to creating the NFC an
- Microsoft.ResourceConnector - Microsoft.Resources
-## Dependant Azure resources setup
+## Dependent Azure resources setup
- Establish [ExpressRoute](/azure/expressroute/expressroute-introduction) connectivity from your on-premises network to an Azure Region: - ExpressRoute circuit [creation and verification](/azure/expressroute/expressroute-howto-circuit-portal-resource-manager)
operator-nexus Howto Configure Isolation Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-isolation-domain.md
For Azure Operator Nexus instances, isolation domains enable communication betwe
1. Ensure that a network fabric controller (NFC) and a network fabric have been created. 1. Install the latest version of the
-[Azure CLI extension for managed network fabrics](./howto-install-cli-extensions.md).
+[Azure CLI extension for managed network fabric](./howto-install-cli-extensions.md).
1. Use the following command to sign in to your Azure account and set the subscription to your Azure subscription ID. This should be the same subscription ID that you use for all the resources in an Azure Operator Nexus instance. ```azurecli
The following parameters are available for configuring isolation domains.
Use the following commands to create an L2 isolation domain: ```azurecli
-az nf l2domain create \
+az networkfabric l2domain create \
--resource-group "ResourceGroupName" \ --resource-name "example-l2domain" \ --location "eastus" \
Expected output:
This command shows details about L2 isolation domains, including their administrative states: ```azurecli
-az nf l2domain show --resource-group "ResourceGroupName" --resource-name "example-l2domain"
+az networkfabric l2domain show --resource-group "ResourceGroupName" --resource-name "example-l2domain"
``` Expected output:
Expected output:
This command lists all L2 isolation domains available in a resource group: ```azurecli
-az nf l2domain list --resource-group "ResourceGroupName"
+az networkfabric l2domain list --resource-group "ResourceGroupName"
``` Expected output:
Expected output:
You must enable an isolation domain to push the configuration to the network fabric devices. Use the following command to change the administrative state of an isolation domain: ```azurecli
-az nf l2domain update-admin-state --resource-group "ResourceGroupName" --resource-name "example-l2domain" --state Enable/Disable
+az networkfabric l2domain update-admin-state --resource-group "ResourceGroupName" --resource-name "example-l2domain" --state Enable/Disable
``` Expected output:
Expected output:
Use this command to delete an L2 isolation domain: ```azurecli
-az nf l2domain delete --resource-group "ResourceGroupName" --resource-name "example-l2domain"
+az networkfabric l2domain delete --resource-group "ResourceGroupName" --resource-name "example-l2domain"
``` Expected output:
The following parameters for isolation domains are optional.
Use this command to create an L3 isolation domain: ```azurecli
-az nf l3domain create
+az networkfabric l3domain create
--resource-group "ResourceGroupName" --resource-name "example-l3domain" --location "eastus"
Expected output:
#### Create an untrusted L3 isolation domain ```azurecli
-az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3untrust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+az networkfabric l3domain create --resource-group "ResourceGroupName" --resource-name "l3untrust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
``` #### Create a trusted L3 isolation domain ```azurecli
-az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3trust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+az networkfabric l3domain create --resource-group "ResourceGroupName" --resource-name "l3trust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
``` #### Create a management L3 isolation domain ```azurecli
-az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3mgmt" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+az networkfabric l3domain create --resource-group "ResourceGroupName" --resource-name "l3mgmt" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
``` ### Show L3 isolation domains
az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3mg
This command shows details about L3 isolation domains, including their administrative states: ```azurecli
-az nf l3domain show --resource-group "ResourceGroupName" --resource-name "example-l3domain"
+az networkfabric l3domain show --resource-group "ResourceGroupName" --resource-name "example-l3domain"
``` Expected output:
Expected output:
Use this command to get a list of all L3 isolation domains available in a resource group: ```azurecli
-az nf l3domain list --resource-group "ResourceGroupName"
+az networkfabric l3domain list --resource-group "ResourceGroupName"
``` Expected output:
Expected output:
Use the following command to change the administrative state of an L3 isolation domain to enabled or disabled: ```azurecli
-az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "example-l3domain" --state Enable/Disable
+az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "example-l3domain" --state Enable/Disable
``` Expected output:
Use the `az show` command to verify whether the administrative state has changed
Use this command to delete an L3 isolation domain: ```azurecli
- az nf l3domain delete --resource-group "ResourceGroupName" --resource-name "example-l3domain"
+ az networkfabric l3domain delete --resource-group "ResourceGroupName" --resource-name "example-l3domain"
``` Use the `show` or `list` command to validate that the isolation domain has been deleted.
The following parameters are optional for creating internal networks.
You need to create an internal network before you enable an L3 isolation domain. This command creates an internal network with BGP configuration and a specified peering address: ```azurecli
-az nf internalnetwork create
+az networkfabric internalnetwork create
--resource-group "ResourceGroupName" --l3-isolation-domain-name "example-l3domain" --resource-name "example-internalnetwork"
Expected output:
### Create an untrusted internal network for an L3 isolation domain ```azurecli
-az nf internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3untrust --resource-name untrustnetwork --location "eastus" --vlan-id 502 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.3.11/24" --mtu 1500
+az networkfabric internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3untrust --resource-name untrustnetwork --location "eastus" --vlan-id 502 --fabric-asn 65048 --peer-asn 65047 --connected-i-pv4-subnets prefix="10.151.3.11/24" --mtu 1500
``` ### Create a trusted internal network for an L3 isolation domain ```azurecli
-az nf internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3trust --resource-name trustnetwork --location "eastus" --vlan-id 503 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.1.11/24" --mtu 1500
+az networkfabric internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3trust --resource-name trustnetwork --location "eastus" --vlan-id 503 --fabric-asn 65048 --peer-asn 65047 --connected-i-pv4-subnets prefix="10.151.1.11/24" --mtu 1500
``` ### Create an internal management network for an L3 isolation domain ```azurecli
-az nf internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3mgmt --resource-name mgmtnetwork --location "eastus" --vlan-id 504 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.2.11/24" --mtu 1500
+az networkfabric internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3mgmt --resource-name mgmtnetwork --location "eastus" --vlan-id 504 --fabric-asn 65048 --peer-asn 65047 --connected-i-pv4-subnets prefix="10.151.2.11/24" --mtu 1500
``` ### Create multiple static routes with a single next hop ```azurecli
-az nf internalnetwork create
+az networkfabric internalnetwork create
--resource-name "example-internalnetwork" --l3domain "example-l3domain" --resource-group "ResourceGroupName"
Expected output:
### Create an internal network by using IPv6 ```azurecli
-az nf internalnetwork create
+az networkfabric internalnetwork create
--resource-group "ResourceGroupName" --l3-isolation-domain-name "example-l3domain" --resource-name "example-internalipv6network"
For Option A, you need to create an external network before you enable the L3 is
### Create an external network by using Option B ```azurecli
-az nf externalnetwork create
+az networkfabric externalnetwork create
--resource-group "ResourceGroupName" --l3domain "examplel3domain" --resource-name "examplel3-externalnetwork"
Expected output:
### Create an external network by using Option A ```azurecli
-az nf externalnetwork create
+az networkfabric externalnetwork create
--resource-group "ResourceGroupName" --l3domain "example-l3domain" --resource-name "example-externalipv4network"
Expected output:
### Create an external network by using IPv6 ```azurecli
-az nf externalnetwork create
+az networkfabric externalnetwork create
--resource-group "ResourceGroupName" --l3-isolation-domain-name "example-l3domain" --resource-name "example-externalipv6network"
Expected output:
## Enable an L2 isolation domain ```azurecli
-az nf l2domain update-administrative-state --resource-group "ResourceGroupName" --resource-name "l2HAnetwork" --state Enable
+az networkfabric l2domain update-administrative-state --resource-group "ResourceGroupName" --resource-name "l2HAnetwork" --state Enable
``` ## Enable an L3 isolation domain
az nf l2domain update-administrative-state --resource-group "ResourceGroupName"
Use this command to enable an untrusted L3 isolation domain: ```azurecli
-az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3untrust" --state Enable
+az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3untrust" --state Enable
``` Use this command to enable a trusted L3 isolation domain: ```azurecli
-az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3trust" --state Enable
+az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3trust" --state Enable
``` Use this command to enable a management L3 isolation domain: ```azurecli
-az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3mgmt" --state Enable
+az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3mgmt" --state Enable
```
operator-nexus Howto Configure Network Fabric Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric-controller.md
az group create -n NFCResourceGroupName -l "East US"
Here's an example of how you can create an NFC by using the Azure CLI: ```azurecli
-az nf controller create \
+az networkfabric controller create \
--resource-group "NFCResourceGroupName" \ --location "eastus" \ --resource-name "nfcname" \
NFC creation takes 30 to 45 minutes. Use the `show` command to monitor the progr
## Get a network fabric controller ```azurecli
- az nf controller show --resource-group "NFCResourceGroupName" --resource-name "nfcname"
+ az networkfabric controller show --resource-group "NFCResourceGroupName" --resource-name "nfcname"
``` Expected output:
Expected output:
You should delete an NFC only after deleting all associated network fabrics. Use this command to delete an NFC: ```azurecli
- az nf controller delete --resource-group "NFCResourceGroupName" --resource-name "nfcname"
+ az networkfabric controller delete --resource-group "NFCResourceGroupName" --resource-name "nfcname"
``` Expected output:
operator-nexus Howto Configure Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric.md
Run the following command to create the network fabric. The rack count is either
```azurecli
-az nf fabric create \
+az networkfabric fabric create \
--resource-group "NFResourceGroupName" --location "eastus" \ --resource-name "NFName" \
Expected output:
### Show network fabrics ```azurecli
-az nf fabric show --resource-group "NFResourceGroupName" --resource-name "NFName"
+az networkfabric fabric show --resource-group "NFResourceGroupName" --resource-name "NFName"
``` Expected output:
Expected output:
### List all network fabrics in a resource group ```azurecli
-az nf fabric list --resource-group "NFResourceGroup"
+az networkfabric fabric list --resource-group "NFResourceGroup"
``` Expected output:
Run the following command to create the NNI:
```azurecli
-az nf nni create \
+az networkfabric nni create \
--resource-group "NFResourceGroup" \ --location "eastus" \ --resource-name "NFNNIName" \
Expected output:
### Show network fabric NNIs ```azurecli
-az nf nni show -g "NFResourceGroup" --resource-name "NFNNIName" --fabric "NFFabric"
+az networkfabric nni show -g "NFResourceGroup" --resource-name "NFNNIName" --fabric "NFFabric"
```
Expected output:
### List or get network fabric NNIs ```azurecli
-az nf nni list -g NFResourceGroup --fabric NFFabric
+az networkfabric nni list -g NFResourceGroup --fabric NFFabric
``` Expected output:
Run the following command to update network fabric devices:
```azurecli
-az nf device update \
+az networkfabric device update \
--resource-group "NFResourceGroup" \ --resource-name "Network-Device-Name" \ --location "eastus" \
For example, `AggrRack` consists of:
Run the following command to list network fabric devices in a resource group: ```azurecli
-az nf device list --resource-group "NFResourceGroup"
+az networkfabric device list --resource-group "NFResourceGroup"
``` Expected output:
Expected output:
Run the following command to get or show details of a network fabric device: ```azurecli
-az nf device show --resource-group "NFResourceGroup" --resource-name "Network-Device-Name"
+az networkfabric device show --resource-group "NFResourceGroup" --resource-name "Network-Device-Name"
``` Expected output:
Expected output:
After you update the device serial number, provision and show the fabric by running the following commands: ```azurecli
-az nf fabric provision --resource-group "NFResourceGroup" --resource-name "NFName"
+az networkfabric fabric provision --resource-group "NFResourceGroup" --resource-name "NFName"
``` ```azurecli
-az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
+az networkfabric fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
``` Expected output:
Expected output:
To deprovision a fabric, ensure that the fabric is in a provisioned operational state and then run this command: ```azurecli
-az nf fabric deprovision --resource-group "NFResourceGroup" --resource-name "NFName"
+az networkfabric fabric deprovision --resource-group "NFResourceGroup" --resource-name "NFName"
```
To delete a fabric, run the following command. Before you do, make sure that:
* No racks are associated with the fabric. ```azurecli
-az nf fabric delete --resource-group "NFResourceGroup" --resource-name "NFName"
+az networkfabric fabric delete --resource-group "NFResourceGroup" --resource-name "NFName"
```
Expected output:
After you successfully delete the network fabric, when you run the command to show the fabric, you won't find any resources available: ```azurecli
-az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
+az networkfabric fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
``` Expected output: ```output
-Command group 'nf' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
-(ResourceNotFound) The Resource 'Microsoft.ManagedNetworkFabric/NetworkFabrics/NFName' under resource group 'NFResourceGroup' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
+The Resource 'Microsoft.ManagedNetworkFabric/NetworkFabrics/NFName' under resource group 'NFResourceGroup' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Code: ResourceNotFound ```
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
If you haven't already installed Azure CLI: [Install Azure CLI][installation-ins
az extension remove --name managednetworkfabric ``` -- Download the `managednetworkfabric` python wheel-
-# [Linux / macOS / WSL](#tab/linux+macos+wsl)
-
-```sh
- curl -L "https://aka.ms/nexus-nf-cli" --output "managednetworkfabric-0.0.0-py3-none-any.whl"
-```
-
-# [PowerShell](#tab/powershell)
-
-```ps
- curl "https://aka.ms/nexus-nf-cli" -OutFile "managednetworkfabric-0.0.0-py3-none-any.whl"
-```
--- - Install and test the `managednetworkfabric` CLI extension ```azurecli
- az extension add --source managednetworkfabric-0.0.0-py3-none-any.whl
- az nf --help
+ az extension add --name managednetworkfabric
+ az networkfabric --help
``` ## Install AKS-Hybrid (`hybridaks`) CLI extension
operator-nexus Howto Run Instance Readiness Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-run-instance-readiness-testing.md
+
+ Title: "Azure Operator Nexus: How to run Instance Readiness Testing"
+description: Learn how to run instance readiness testing.
++++ Last updated : 07/13/2023+++
+# Instance readiness testing
+
+Instance Readiness Testing (IRT) is a framework built to orchestrate real-world workloads for testing the Azure Operator Nexus platform.
+
+## Environment requirements
+
+- A Linux environment (Ubuntu suggested) capable of calling Azure APIs
+- Knowledge of networks to use for the test
+  * Networks to use for the test are specified in a "networks-blueprint.yml" file; see [Input Configuration](#input-configuration).
+- curl or wget to download the IRT package
+
+## Before execution
+
+1. From your Linux environment, download `nexus-irt.tar.gz` from aka.ms/nexus-irt: `curl -Lo nexus-irt.tar.gz aka.ms/nexus-irt`.
+1. Extract the tarball to the local file system: `mkdir -p irt && tar xf nexus-irt.tar.gz --directory ./irt`.
+1. Switch to the new directory `cd irt`.
+1. The `setup.sh` script is provided to aid in the initial setup of an environment.
+  * `setup.sh` assumes a nonroot user and uses `sudo` to install the following tools:
+ 1. `jq` version 1.6
+ 1. `yq` version 4.33
+ 1. `azcopy` version 10
+    1. `az` Azure CLI (no specific minimum version is known; stay up to date)
+ 1. `elinks` for viewing html files on the command line
+ 1. `tree` for viewing directory structures
+ 1. `moreutils` utilities for viewing progress from the ACI container
+1. [Optional] Set up a storage account to archive test results over time. For help, see the [instructions](#uploading-results-to-your-own-archive).
+1. Sign in to Azure, if you're not already signed in: `az login --use-device-code`.
+ * User should have `Contributor` role
+1. Create an Azure Managed Identity for the container to use.
+  * Using the provided script: `MI_RESOURCE_GROUP="<your resource group>" MI_NAME="<managed identity name>" SUBSCRIPTION="<subscription>" ./create-managed-identity.sh`
+  * Can be created manually via the Azure portal; refer to the script for the needed permissions
+1. Create a service principal and security group. The service principal is used as the executor of the test. The group informs the Kubernetes cluster of valid users. The service principal must be a member of the security group so that it can log in to the cluster.
+  * You can provide your own, or use the provided script. Here's an example of how it could be executed: `AAD_GROUP_NAME=external-test-aad-group-8 SERVICE_PRINCIPAL_NAME=external-test-sp-8 ./irt/create-service-principal.sh`.
+ * This script prints four key/value pairs for you to include in your input file.
+1. If necessary, create the isolation domains required to execute the tests. They aren't lifecycled as part of this test scenario.
+  * **Note:** If deploying isolation domains, your network blueprint must define at least one external network per isolation domain. See `networks-blueprint.example.yml` for help with configuring your network blueprint.
+ * `create-l3-isolation-domains.sh` takes one parameter, a path to your networks blueprint file; here's an example of the script being invoked:
+ * `create-l3-isolation-domains.sh ./networks-blueprint.yml`
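+
+Taken together, the preceding steps amount to something like the following condensed sketch. All values are placeholders, and the script paths assume your working directory is the extracted `irt` folder:
+
+```bash
+# Condensed sketch of the setup steps above; all names are placeholders.
+curl -Lo nexus-irt.tar.gz aka.ms/nexus-irt                  # download the IRT package
+mkdir -p irt && tar xf nexus-irt.tar.gz --directory ./irt   # extract it
+cd irt
+./setup.sh                                                  # installs jq, yq, azcopy, az, elinks, tree, moreutils
+az login --use-device-code                                  # sign in as a user with the Contributor role
+MI_RESOURCE_GROUP="<resource-group>" MI_NAME="<managed-identity-name>" \
+  SUBSCRIPTION="<subscription-id>" ./create-managed-identity.sh
+AAD_GROUP_NAME="<group-name>" SERVICE_PRINCIPAL_NAME="<sp-name>" \
+  ./create-service-principal.sh                             # prints key/value pairs for irt-input.yml
+./create-l3-isolation-domains.sh ./networks-blueprint.yml   # only if the isolation domains don't already exist
+```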
+
+### Input configuration
+
+1. Build your input file. The IRT tarball provides `irt-input.example.yml` as an example. These values **will not work for all instances**; they need to be manually changed, and the file also needs to be renamed to `irt-input.yml`.
+1. Define the values of the networks-blueprint input. An example of this file is given in `networks-blueprint.example.yml`.
+
+The network blueprint input schema for IRT is defined in `networks-blueprint.example.yml`. Currently, IRT has the following network requirements. The networks are created as part of the test, so provide network details that aren't already in use.
+
+1. Three (3) L3 Networks
+
+ * Two of them with MTU 1500
+  * One of them with MTU 9000, which shouldn't have a fabric_asn definition
+
+1. One (1) Trunked Network
+1. All VLAN IDs should be greater than 500
+
+## Execution
+
+1. Execute: `./irt.sh irt-input.yml`
+  * Assumes `irt-input.yml` is in the same location as `irt.sh`. If it's in a different location, provide the full file path.
+
+## Results
+
+1. A file named `summary-<cluster_name>-<timestamp>.html` is downloaded at the end of the run and contains the testing results. It can be viewed:
+ 1. From any browser
+  1. Using `elinks` or `lynx` to view from the command line; for example:
+  1. `elinks summary-<cluster_name>-<timestamp>.html`
+  1. When an SAS token is provided for the `PUBLISH_RESULTS_TO` parameter, the results are uploaded to the blob container you specified. You can preview them by navigating to the link presented at the end of the IRT run.
+
+### Uploading results to your own archive
+
+1. We offer a supplementary script, `create-archive-storage.sh`, to help you set up a storage container for your results. The script generates an SAS token, valid for three days, for the storage container. It creates the storage container, storage account, and resource group if they don't already exist.
+ 1. The script expects the following environment variables to be defined:
+ 1. RESOURCE_GROUP
+ 1. SUBSCRIPTION
+ 1. STORAGE_ACCOUNT_NAME
+ 1. STORAGE_CONTAINER_NAME
+1. Copy the last output from the script into your IRT YAML input. The output looks like this:
+ * `PUBLISH_RESULTS_TO="<sas-token>"`
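+
+For example, a minimal end-to-end invocation might look like the following sketch; all values are placeholders:
+
+```bash
+# Placeholders only; the script creates these resources if they don't already exist.
+export RESOURCE_GROUP="<resource-group>"
+export SUBSCRIPTION="<subscription-id>"
+export STORAGE_ACCOUNT_NAME="<storage-account-name>"
+export STORAGE_CONTAINER_NAME="<storage-container-name>"
+./create-archive-storage.sh
+# Copy the last line of output, PUBLISH_RESULTS_TO="<sas-token>", into irt-input.yml.
+```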
operator-nexus Reference Customer Edge Provider Edge Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-customer-edge-provider-edge-connectivity.md
You can use standard BGP (option A). You can also use MP-BGP with inter-as Optio
For MP-BGP make sure you configure matching route targets on both PE and CE. ```azurecli
-az nf fabric create \
+az networkfabric fabric create \
--resource-group "example-rg" \ --location "eastus" \ --resource-name "example-nf" \
operator-nexus Troubleshoot Aks Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-aks-hybrid-cluster.md
At a high level, the steps to create isolation domains are:
1. Enable the L3 isolation domain by using the following command: ~~~bash
- az nf l3domain update-admin-state --resource-group "RESOURCE_GROUP_NAME" --resource-name "L3ISOLATIONDOMAIN_NAME" --state "Enable"
+ az networkfabric l3domain update-admin-state --resource-group "RESOURCE_GROUP_NAME" --resource-name "L3ISOLATIONDOMAIN_NAME" --state "Enable"
~~~ It's important to check that the fabric resources achieve an `administrativeState` value of `Enabled`, and that the `provisioningState` value is `Succeeded`. If the `update-admin-state` step is skipped or unsuccessful, the networks can't operate. You can use `show` commands to check the values. For example: ~~~bash
-az nf l3domain show -g "example-rg" --resource-name "l2domainname" -o table
-az nf l2domain show -g "example-rg" --resource-name "l3domainname" -o table
+az networkfabric l3domain show -g "example-rg" --resource-name "l2domainname" -o table
+az networkfabric l2domain show -g "example-rg" --resource-name "l3domainname" -o table
~~~ ### Network cloud network status is Failed
operator-service-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/overview.md
+
+ Title: About Azure Operator Service Manager
+description: Learn about Azure Operator Service Manager, an Azure Service for the management of Network Services for telecom operators.
++ Last updated : 04/09/2023+++
+# About Azure Operator Service Manager
+
+Azure Operator Service Manager is an Azure service designed to assist telecom operators in managing their network services. It provides streamlined management capabilities for intricate, multi-part, multi-vendor applications across numerous hybrid cloud sites, encompassing Azure regions, edge platforms, and Arc-connected sites. Initially, Azure Operator Service Manager caters to the needs of telecom operators who are in the process of migrating their workloads to Azure and Arc-connected cloud environments.
+
+Azure Operator Service Manager expands and improves the Network Function Manager by incorporating technology and ideas from Azure for Operators' on-premises management tools. Its purpose is to manage the convergence of comprehensive, multi-vendor service solutions on a per-site basis. It uses a declarative software and configuration model for the system. It also combines Azure's hyperscaler experience and tooling for error-free Safe Deployment Practices (SDP) across sites grouped in canary tiers.
+
+## Product features
+
+Azure Operator Service Manager provides an Azure-native abstraction for modeling and realizing a distributed network service using extra resource types in Azure Resource Manager (ARM) through our cloud service. A network service is represented as a network graph comprising multiple network functions, with appropriate policies controlling the data plane to meet each telecom operator's operational needs. Creation of templates of configuration schemas allows for per-site variation that is often required in such deployments.
+
+The service is partitioned into a global control plane, which operates on Azure, and site control planes. Site control planes also function on Azure, but are confined to specific sites, such as on-premises and hybrid sites.
+
+The global control plane hosts the interfaces for publishers, designers, and operators. All of the applicable resources are immutable versioned objects, replicated to all Azure regions. The global control plane also hosts the Safe Deployment Practices (SDP) Global Convergence Agent, which is responsible for driving rollout across sites.
+
+The site control plane consists of the Site Convergence Agent. The Site Convergence Agent is responsible for mapping the desired state of a site. The desired state ranges from the network service level down to the network function and cloud resource level. The Site Convergence Agent converges each site to the desired state, and runs in an Azure region as a resource provider in that region.
+
+## Benefits
+
+Azure Operator Service Manager provides the following benefits:
+
+- Provides a single management experience for all Azure for operators solutions in Azure or connected clouds.
+- Offers SDK and PowerShell services to further extend the reach to include third-party network functions and network services.
+- Implements consistent best-practice safe deployment practices (SDP) fleet-wide.
+- Provides blast-radius limitations and disconnected mode support to enable five-nines operation of these services.
+- Offers clear dashboard reporting of convergence state for each site and canary level.
+- Enables real telecom DevOps working, eliminating the need for NF-specific maintenance windows.
+
+## Get access to Azure Operator Service Manager
+
+Azure Operator Service Manager is currently in public preview. To get started, contact us at [aosmpartner@microsoft.com](mailto:aosmpartner@microsoft.com?subject=Azure%20Operator%20Service%20Manager%20preview%20request&Body=Hello%2C%0A%0AI%20would%20like%20to%20request%20access%20to%20the%20Azure%20Operator%20Service%20Manager%20preview%20documentation.%0A%0AMy%20GitHub%20username%20is%3A%20%0A%0AThank%20you%21), provide your GitHub username, and request access to our preview documentation.
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
You can get playbook templates from the following sources:
When a new version of the template is published, the active playbooks created from that template show up in the **Active playbooks** tab displaying a label indicating that an update is available. -- Playbook templates are available as part of product solutions or standalone content that you install from the content hub in Microsoft Sentinel. For more information, see [Microsoft Sentinel content and solutions](sentinel-solutions.md) and [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
+- Playbook templates are available as part of product solutions or standalone content that you install from the **Content hub** page in Microsoft Sentinel. For more information, see [Microsoft Sentinel content and solutions](sentinel-solutions.md) and [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
- The [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks) contains many playbook templates. They can be deployed to an Azure subscription by selecting the **Deploy to Azure** button.
The following recommended playbooks, and other similar playbooks are available t
- **Notification playbooks** are triggered when an alert or incident is created and send a notification to a configured destination:
- - [Post a message in a Microsoft Teams channel](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/Post-Message-Teams)
- - [Send an Outlook email notification](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Incident-Email-Notification)
- - [Post a message in a Slack channel](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/Post-Message-Slack)
+ | Playbook | Folder in<br>GitHub&nbsp;repository | Solution in Content hub/<br>Azure Marketplace |
+ | -- | -- | -- |
+ | **Post a message in a Microsoft Teams channel** | [Post-Message-Teams](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/Post-Message-Teams) | [Sentinel SOAR Essentials solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sentinelsoaressentials?tab=Overview) |
+ | **Send an Outlook email notification** | [Send-basic-email](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/Send-basic-email) | [Sentinel SOAR Essentials solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sentinelsoaressentials?tab=Overview) |
+ | **Post a message in a Slack channel** | [Post-Message-Slack](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/Post-Message-Slack) | [Sentinel SOAR Essentials solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sentinelsoaressentials?tab=Overview) |
+ | **Send Microsoft Teams adaptive card on incident creation** | [Send-Teams-adaptive-card-on-incident-creation](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/Send-Teams-adaptive-card-on-incident-creation) | [Sentinel SOAR Essentials solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sentinelsoaressentials?tab=Overview) |
- **Blocking playbooks** are triggered when an alert or incident is created, gather entity information like the account, IP address, and host, and blocks them from further actions:
- - [Prompt to block an IP address](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-IPs-on-MDATP-Using-GraphSecurity).
- - [Block an Azure AD user](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-AADUserOrAdmin)
- - [Reset an Azure AD user password](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Azure%20Active%20Directory/Playbooks/Reset-AADUserPassword)
- - [Prompt to isolate a machine](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Isolate-AzureVMtoNSG)
+ | Playbook | Folder in<br>GitHub&nbsp;repository | Solution in Content&nbsp;hub/<br>Azure Marketplace |
+ | -- | -- | -- |
+ | **Block an IP address in Azure Firewall** | [AzureFirewall-BlockIP-addNewRule](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Azure%20Firewall/Playbooks/AzureFirewall-BlockIP-addNewRule) | [Azure Firewall Solution for Sentinel](https://azuremarketplace.microsoft.com/en-US/marketplace/apps/sentinel4azurefirewall.sentinel4azurefirewall?tab=Overview) |
+ | **Block an Azure AD user** | [Block-AADUser](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Azure%20Active%20Directory/Playbooks/Block-AADUser) | [Azure Active Directory solution](https://azuremarketplace.microsoft.com/en-US/marketplace/apps/azuresentinel.azure-sentinel-solution-azureactivedirectory?tab=Overview) |
+ | **Reset an Azure AD user password** | [Reset-AADUserPassword](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Azure%20Active%20Directory/Playbooks/Reset-AADUserPassword) | [Azure Active Directory solution](https://azuremarketplace.microsoft.com/en-US/marketplace/apps/azuresentinel.azure-sentinel-solution-azureactivedirectory?tab=Overview) |
+ | **Isolate or unisolate device using<br>Microsoft Defender for Endpoint** | [Isolate-MDEMachine](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/MicrosoftDefenderForEndpoint/Playbooks/Isolate-MDEMachine)<br>[Unisolate-MDEMachine](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/MicrosoftDefenderForEndpoint/Playbooks/Unisolate-MDEMachine) | [Microsoft Defender for Endpoint solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-microsoftdefenderendpoint?tab=Overview) |
- **Create, update, or close playbooks** can create, update, or close incidents in Microsoft Sentinel, Microsoft 365 security services, or other ticketing systems:
- - [Change an incident's severity](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Change-Incident-Severity)
- - [Create a ServiceNow incident](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Servicenow/Playbooks/Create-SNOW-record)
+ | Playbook | Folder in<br>GitHub&nbsp;repository | Solution in Content hub/<br>Azure Marketplace |
+ | -- | -- | -- |
+ | **Create an incident using Microsoft Forms** | [CreateIncident-MicrosoftForms](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/CreateIncident-MicrosoftForms) | [Sentinel SOAR Essentials solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sentinelsoaressentials?tab=Overview) |
+ | **Relate alerts to incidents** | [relateAlertsToIncident-basedOnIP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/relateAlertsToIncident-basedOnIP) | [Sentinel SOAR Essentials solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sentinelsoaressentials?tab=Overview) |
+ | **Create a ServiceNow incident** | [Create-SNOW-record](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Servicenow/Playbooks/Create-SNOW-record) | [ServiceNow solution](https://azuremarketplace.microsoft.com/en-US/marketplace/apps/azuresentinel.azure-sentinel-solution-servicenow?tab=Overview) |
## Next steps
service-fabric Service Fabric Cluster Upgrade Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade-os.md
Last updated 07/14/2022
This document describes how to migrate your Azure Service Fabric for Linux cluster from Ubuntu version 18.04 LTS to 20.04 LTS. Each operating system (OS) version requires a different Service Fabric runtime package. This article describes the steps required to facilitate a smooth migration to the newer version.
+> [!NOTE]
+> U18.04 reached end-of-life in June 2023. Starting with the 10.0CU1 release, Service Fabric runtime will discontinue support for U18.04. Service Fabric will no longer provide updates or patches at that time.
+ ## Approach to migration The general approach to the migration follows these steps:
spring-apps How To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-custom-domain.md
Certificates encrypt web traffic. These TLS/SSL certificates can be stored in Az
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - (Optional) [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`-- An application deployed to Azure Spring Apps (see [Quickstart: Launch an existing application in Azure Spring Apps using the Azure portal](./quickstart.md), or use an existing app).
+- An application deployed to Azure Spring Apps (see [Quickstart: Launch an existing application in Azure Spring Apps using the Azure portal](./quickstart.md), or use an existing app). If your application is deployed using the Basic plan, be sure to upgrade to the Standard plan.
- A domain name with access to the DNS registry for a domain provider, such as GoDaddy. - A private certificate (that is, your self-signed certificate) from a third-party provider. The certificate must match the domain. - A deployed instance of Azure Key Vault. For more information, see [About Azure Key Vault](../key-vault/general/overview.md).
Use the following steps to upload your certificate to key vault:
1. Go to your key vault instance. 1. In the navigation pane, select **Certificates**. 1. On the upper menu, select **Generate/import**.
-1. In the **Create a certificate** dialog under **Method of certificate creation**, select `Import`.
+1. On the **Create a certificate** page, select **Import** for **Method of Certificate Creation**, and then provide a value for **Certificate Name**.
1. Under **Upload Certificate File**, navigate to certificate location and select it. 1. Under **Password**, if you're uploading a password protected certificate file, provide that password here. Otherwise, leave it blank. Once the certificate file is successfully imported, key vault removes that password. 1. Select **Create**.
Use the following command to import a certificate:
```azurecli az keyvault certificate import \
- --file <path-to-pfx-file> \
+ --file <path-to-pfx-or-pem-file> \
--name <certificate-name> \ --vault-name <key-vault-name> \
- --password <export-password>
+ --password <password-if-needed>
```
az keyvault set-policy \
:::image type="content" source="./media/how-to-custom-domain/import-certificate.png" alt-text="Screenshot of the Azure portal showing the TLS/SSL settings page for an Azure Spring Apps instance, with the Import key vault certificate button highlighted." lightbox="./media/how-to-custom-domain/import-certificate.png":::
+1. On the **Select certificate from Azure** page, select the **Subscription**, **Key Vault**, and **Certificate** from the drop-down options, and then choose **Select**.
+
+ :::image type="content" source="./media/how-to-custom-domain/select-certificate-from-key-vault.png" alt-text="Screenshot of the Azure portal showing the Select certificate from Azure page." lightbox="./media/how-to-custom-domain/select-certificate-from-key-vault.png":::
+
+1. On the opened **Set certificate name** page, enter your certificate name, and then select **Apply**.
+ 1. When you have successfully imported your certificate, it displays in the list of **Private Key Certificates**. :::image type="content" source="./media/how-to-custom-domain/key-certificates.png" alt-text="Screenshot of a private key certificate.":::
You can use a CNAME record to map a custom DNS name to Azure Spring Apps.
Go to your DNS provider and add a CNAME record to map your domain to `<service-name>.azuremicroservices.io`. Here, `<service-name>` is the name of your Azure Spring Apps instance. Wildcard domains and subdomains are supported. After you add the CNAME, the DNS records page resembles the following example: ## Map your custom domain to Azure Spring Apps app
-If you don't have an application in Azure Spring Apps, follow the instructions in [Quickstart: Launch an existing application in Azure Spring Apps using the Azure portal](./quickstart.md).
+If you don't have an application in Azure Spring Apps, follow the instructions in [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
#### [Azure portal](#tab/Azure-portal)
Use the following command to show the list of custom domains:
```azurecli az spring app custom-domain list \
- --resource-group <resource-group-name>
- --service <Azure-Spring-Apps-instance-name>
- --app <app-name> \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --app <app-name>
```
Use the following command to update a custom domain of the app.
```azurecli az spring app custom-domain update \
- --resource-group <resource-group-name>
- --service <service-name>
+ --resource-group <resource-group-name> \
+ --service <service-name> \
--domain-name <domain-name> \ --certificate <cert-name> \
- --app <app-name> \
+ --app <app-name>
```
By default, anyone can still access your app using HTTP, but you can redirect al
#### [Azure portal](#tab/Azure-portal)
-In your app page, in the navigation, select **Custom Domain**. Then, set **HTTPS Only**, to `True`.
+On your app page, in the navigation pane, select **Custom Domain**. Then, set **HTTPS Only** to `Yes`.
#### [Azure CLI](#tab/Azure-CLI)
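A minimal sketch using the `az spring app update` command and its `--https-only` flag; parameter values are placeholders:

```azurecli
az spring app update \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --https-only true
```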
When the operation is complete, navigate to any of the HTTPS URLs that point to
- [What is Azure Key Vault?](../key-vault/general/overview.md) - [Import a certificate](../key-vault/certificates/certificate-scenarios.md#import-a-certificate)-- [Launch your Spring Cloud App by using the Azure CLI](./quickstart.md)
+- [Use TLS/SSL certificates](./how-to-use-tls-certificate.md)
spring-apps How To New Relic Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-new-relic-monitor.md
ms.devlang: azurecli
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise
+**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise
This article shows you how to monitor Spring Boot applications in Azure Spring Apps with the New Relic Java agent.
spring-apps How To Use Application Live View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-application-live-view.md
Use the following steps to deploy an app and monitor it in Application Live View
--output tsv ```
+ You can also access the Application Live View using Visual Studio Code (VS Code). For more information, see the [Use Application Live View in VS Code](#use-application-live-view-in-vs-code) section.
+ ## Manage Application Live View in existing Enterprise plan instances You can enable Application Live View in an existing Azure Spring Apps Enterprise plan instance using the Azure portal or Azure CLI.
az spring dev-tool create \
+## Use Application Live View in VS Code
+
+You can access Application Live View directly in VS Code to monitor your apps in the Azure Spring Apps Enterprise plan.
+
+### Prerequisites
+
+- [Visual Studio Code](https://code.visualstudio.com/Download)
+- [Azure Spring Apps extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-spring-cloud)
+
+### View Application Live View dashboard
+
+Use the following steps to view the Application Live View dashboard for a service instance:
+
+1. In Visual Studio Code, open the Azure Spring Apps extension, and then sign in to your Azure account.
+1. Expand the service instance that you want to monitor and right-click to select the service instance.
+1. Select **Open Application Live View** from the menu to open the Application Live View dashboard in your default browser.
+
+ :::image type="content" source="media/how-to-use-application-live-view/visual-studio-code-open-service.png" alt-text="Screenshot of the VS Code extension showing the Open Application Live View option for a service instance." lightbox="media/how-to-use-application-live-view/visual-studio-code-open-service.png":::
+
+### View Application Live View page for an app
+
+Use the following steps to view the Application Live View page for an app:
+
+1. In Visual Studio Code, open the Azure Spring Apps extension, and then sign in to your Azure account.
+1. Expand the service instance and the app that you want to monitor. Right-click the app.
+1. Select **Open Application Live View** from the menu to open the Application Live View page for the app in your default browser.
+
+ :::image type="content" source="media/how-to-use-application-live-view/visual-studio-code-open-app.png" alt-text="Screenshot of the VS Code extension showing the Open Application Live View option for an app." lightbox="media/how-to-use-application-live-view/visual-studio-code-open-app.png":::
+
+### Troubleshoot Application Live View issues
+
+If you try to open Application Live View for a service instance or an app that doesn't have Application Live View enabled or a public endpoint exposed, an error message appears.
+
+ :::image type="content" source="media/how-to-use-application-live-view/visual-studio-code-not-enabled.png" alt-text="Screenshot of the error message showing Application Live View not enabled and public endpoint not accessible." lightbox="media/how-to-use-application-live-view/visual-studio-code-not-enabled.png":::
+
+To enable Application Live View and expose a public endpoint, use either the Azure portal or the Azure CLI. For more information, see the [Manage Application Live View in existing Enterprise plan instances](#manage-application-live-view-in-existing-enterprise-plan-instances) section.
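As a hedged sketch of the CLI route, assuming your version of the `spring` CLI extension includes the `application-live-view` and `dev-tool` command groups (verify with `az spring --help`):

```azurecli
# Enable Application Live View on an existing Enterprise plan instance.
az spring application-live-view create \
    --resource-group <resource-group-name> \
    --service <service-instance-name>

# Expose a public endpoint for the Dev Tool Portal that hosts the dashboard.
az spring dev-tool create \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --assign-endpoint
```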
+
## Next steps

- [Azure Spring Apps](index.yml)
spring-apps Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-mysql.md
**This article applies to:** ✔️ Basic/Standard ❌ Enterprise
-Pet Clinic, as deployed in the default configuration [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md), uses an in-memory database (HSQLDB) that is populated with data at startup. This quickstart explains how to provision and prepare an Azure Database for MySQL instance and then configure Pet Clinic on Azure Spring Apps to use it as a persistent database with only one command.
+Pet Clinic, as deployed in the default configuration of [Quickstart: Build and deploy apps to Azure Spring Apps](./quickstart-deploy-apps.md), uses an in-memory database (HSQLDB) that is populated with data at startup. This quickstart explains how to provision and prepare an Azure Database for MySQL instance and then configure Pet Clinic on Azure Spring Apps to use it as a persistent database.
## Prerequisites

An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-## Prepare an Azure Database for MySQL instance
+## Create an Azure Database for MySQL instance
-1. Create an Azure Database for MySQL flexible server using the [az mysql flexible-server create](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-create) command. Replace the placeholders `<database-name>`, `<resource-group-name>`, `<MySQL-flexible-server-name>`, `<admin-username>`, and `<admin-password>` with a name for your new database, the name of your resource group, a name for your new server, and an admin username and password. Use single quotes around the value for `admin-password`.
+Create an Azure Database for MySQL flexible server using the [az mysql flexible-server create](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-create) command. Replace the placeholders `<database-name>`, `<resource-group-name>`, `<MySQL-flexible-server-name>`, `<admin-username>`, and `<admin-password>` with a name for your new database, the name of your resource group, a name for your new server, and an admin username and password. Use single quotes around the value for `admin-password`.
- ```azurecli-interactive
- az mysql flexible-server create \
- --resource-group <resource-group-name> \
- --name <MySQL-flexible-server-name> \
- --database-name <database-name> \
- --public-access 0.0.0.0 \
- --admin-user <admin-username> \
- --admin-password '<admin-password>'
- ```
+```azurecli-interactive
+az mysql flexible-server create \
+ --resource-group <resource-group-name> \
+ --name <MySQL-flexible-server-name> \
+ --database-name <database-name> \
+ --public-access 0.0.0.0 \
+ --admin-user <admin-username> \
+ --admin-password '<admin-password>'
+```
- > [!NOTE]
- > The `Standard_B1ms` SKU is used by default. For pricing details, see [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
+> [!NOTE]
+> The `Standard_B1ms` SKU is used by default. For pricing details, see [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
- > [!TIP]
- > The password should be at least eight characters long and contain at least one English uppercase letter, one English lowercase letter, one number, and one non-alphanumeric character (!, $, #, %, and so on.).
+> [!TIP]
+> The password should be at least eight characters long and contain at least one English uppercase letter, one English lowercase letter, one number, and one non-alphanumeric character (!, $, #, %, and so on).
## Connect your application to the MySQL database
Use [Service Connector](../service-connector/overview.md) to connect the app hos
az provider register --namespace Microsoft.ServiceLinker
```
-1. Run the `az spring connection create` command to create a service connection between Azure Spring Apps and the Azure MySQL database. Replace the placeholders below with your own information. Use single quotes around the value for MySQL server `secret`.
+1. Run the `az spring connection create` command to create a service connection between the `customers-service` app and the Azure MySQL database. Replace the placeholders for the following settings with your own information. Use single quotes around the value for MySQL server `secret`.
| Setting | Description |
|--|--|
Use [Service Connector](../service-connector/overview.md) to connect the app hos
az spring connection create mysql-flexible \
    --resource-group <Azure-Spring-Apps-resource-group-name> \
    --service <Azure-Spring-Apps-resource-name> \
- --app <app-name> \
+ --app customers-service \
    --connection <mysql-connection-name-for-app> \
    --target-resource-group <mySQL-server-resource-group> \
    --server <server-name> \
Use [Service Connector](../service-connector/overview.md) to connect the app hos
> [!TIP]
> If the `az spring` command isn't recognized by the system, check that you have installed the Azure Spring Apps extension by running `az extension add --name spring`.
-### [Portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
1. In the Azure portal, type the name of your Azure Spring Apps instance in the search box at the top of the screen and select your instance.
-1. Under **Settings**, select **Apps** and select the application from the list.
+1. Under **Settings**, select **Apps**, and then select the `customers-service` application from the list.
1. Select **Service Connector** from the left table of contents and select **Create**.

   :::image type="content" source="./media/quickstart-integrate-azure-database-mysql/create-service-connection.png" alt-text="Screenshot of the Azure portal, in the Azure Spring Apps instance, create a connection with Service Connector.":::
Use [Service Connector](../service-connector/overview.md) to connect the app hos
+Repeat these steps to create connections for the `customers-service`, `vets-service`, and `visits-service` applications.
+
## Check connection to MySQL database

### [Azure CLI](#tab/azure-cli)
-Run the `az spring connection validate` command to show the status of the connection between Azure Spring Apps and the Azure MySQL database. Replace the placeholders below with your own information.
+Run the `az spring connection validate` command to show the status of the connection between the `customers-service` app and the Azure MySQL database. Replace the placeholders with your own information.
```azurecli-interactive
az spring connection validate \
    --resource-group <Azure-Spring-Apps-resource-group-name> \
    --service <Azure-Spring-Apps-resource-name> \
- --app <app-name> \
+ --app customers-service \
    --connection <mysql-connection-name-for-app> \
    --output table
```
Azure Spring Apps connections are displayed under **Settings > Service Connector
+Repeat these instructions to validate the connections for the `customers-service`, `vets-service`, and `visits-service` applications.
+
+## Update apps to use MySQL profile
+
+The following section explains how to update the apps to connect to the MySQL database.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the following command to set an environment variable to activate the `mysql` profile for the `customers-service` app:
+
+```azurecli
+az spring app update \
+ --resource-group <Azure-Spring-Apps-resource-group-name> \
+ --service <Azure-Spring-Apps-resource-name> \
+ --name customers-service \
+ --env SPRING_PROFILES_ACTIVE=mysql
+```
+
+### [Azure portal](#tab/azure-portal)
+
+Use the following steps to set an environment variable to activate the `mysql` profile for the `customers-service` app:
+
+1. Go to the Azure Spring Apps instance overview page, select **Apps** from the navigation menu, and then select the **customers-service** application from the list.
+
+1. On the **customers-service Overview** page, select **Configuration** from the navigation menu.
+
+1. On the **Configuration** page, select **Environment variables**. Enter `SPRING_PROFILES_ACTIVE` for **Key**, enter `mysql` for **Value**, and then select **Save**.
+
+ :::image type="content" source="media/quickstart-integrate-azure-database-mysql/customers-service-app-configuration.png" alt-text="Screenshot of the Azure portal showing the configuration settings for the customers-service app." lightbox="media/quickstart-integrate-azure-database-mysql/customers-service-app-configuration.png":::
+++
+Repeat these instructions to update app configuration for the `customers-service`, `vets-service`, and `visits-service` applications.
+
+## Validate the apps
+
+To validate the Pet Clinic service and to query records from the MySQL database to confirm the database connection, follow the instructions in the [Verify the services](./quickstart-deploy-apps.md?pivots=programming-language-java#verify-the-services) section of [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
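To confirm the records directly from the command line instead, one hedged option is `az mysql flexible-server execute` from the `rdbms-connect` extension; a sketch, assuming the `owners` table from the Pet Clinic schema:

```azurecli
# Query the MySQL database directly to confirm Pet Clinic wrote its data.
# Requires the rdbms-connect extension: az extension add --name rdbms-connect
az mysql flexible-server execute \
    --name <MySQL-flexible-server-name> \
    --admin-user <admin-username> \
    --admin-password '<admin-password>' \
    --database-name <database-name> \
    --querytext "SELECT COUNT(*) FROM owners;"
```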
## Clean up resources

If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group by using the [az group delete](/cli/azure/group#az-group-delete) command, which deletes the resources in the resource group. Replace `<resource-group>` with the name of your resource group.
az group delete --name <resource-group>
## Next steps
-* [Bind an Azure Database for MySQL instance to your application in Azure Spring Apps](how-to-bind-mysql.md)
+* [Bind an Azure Database for MySQL instance to your application in Azure Spring Apps](./how-to-bind-mysql.md)
* [Use a managed identity to connect Azure SQL Database to an app in Azure Spring Apps](./connect-managed-identity-to-azure-sql.md)
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
The following table summarizes the features of the hot, cool, cold, and archive
| | **Hot tier** | **Cool tier** | **Cold tier (preview)** | **Archive tier** |
|--|--|--|--|--|
| **Availability** | 99.9% | 99% | 99% | 99% |
-| **Availability** <br> **(RA-GRS reads)** | 99.99% | 99.999% | 99.999% | 99.999% |
+| **Availability** <br> **(RA-GRS reads)** | 99.99% | 99.9% | 99.9% | 99.9% |
| **Usage charges** | Higher storage costs, but lower access and transaction costs | Lower storage costs, but higher access and transaction costs | Lower storage costs, but higher access and transaction costs | Lowest storage costs, but highest access and transaction costs |
| **Minimum recommended data retention period** | N/A | 30 days<sup>1</sup> | 90 days<sup>1</sup> | 180 days |
| **Latency** <br> **(Time to first byte)** | Milliseconds | Milliseconds | Milliseconds | Hours<sup>2</sup> |
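Moving a blob between these tiers is a single operation; a minimal sketch using `az storage blob set-tier` (account, container, and blob names are placeholders):

```azurecli
# Move a blob from its current tier to the cool tier.
az storage blob set-tier \
    --account-name <storage-account-name> \
    --container-name <container-name> \
    --name <blob-name> \
    --tier Cool \
    --auth-mode login
```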
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Last updated 06/23/2023--++
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Previously updated : 03/13/2023 Last updated : 07/14/2023
If you performed an [account failover](storage-disaster-recovery-guidance.md) fo
During a [conversion](#perform-a-conversion), you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the migration process and there is no data loss associated with a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
-If you initiate a conversion from the Azure portal, the conversion process could take up to 72 hours to begin, and possibly longer if requested by opening a support request.
- If you choose to perform a manual migration, downtime is required but you have more control over the timing of the migration process.
+## Timing and frequency
+
+If you initiate a zone-redundancy [conversion](#customer-initiated-conversion) from the Azure portal, the conversion process could take up to 72 hours to actually begin. It could take longer to start if you [request a conversion by opening a support request](#support-requested-conversion). If a customer-initiated conversion does not enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. To monitor the progress of a customer-initiated conversion, see [Monitoring customer-initiated conversion progress](#monitoring-customer-initiated-conversion-progress).
+
+There is no SLA for completion of a conversion. If you need more control over when a conversion begins and finishes, consider a [Manual migration](#manual-migration). Generally, the more data you have in your account, the longer it takes to replicate that data to other zones or regions.
+
+After a zone-redundancy conversion, you must wait at least 72 hours before changing the redundancy setting of the storage account again. The temporary hold allows background processes to complete before making another change, ensuring the consistency and integrity of the account. For example, going from LRS to GZRS is a 2-step process. You must add zone redundancy in one operation, then add geo-redundancy in a second. After going from LRS to ZRS, you must wait at least 72 hours before going from ZRS to GZRS.
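For example, the second step of that sequence is a SKU update; a minimal sketch, assuming the ZRS conversion has already completed and the 72-hour hold has elapsed:

```azurecli
# Step 2 of LRS -> GZRS: add geo-redundancy to an account that is already ZRS.
az storage account update \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --sku Standard_GZRS
```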
+
## Costs associated with changing how data is replicated

Ordering from the least to the most expensive, Azure Storage redundancy offerings include LRS, ZRS, GRS, RA-GRS, GZRS, and RA-GZRS.
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
There are two types of service endpoints available for a storage account:
- [Standard endpoints](#standard-endpoints) (recommended). By default, you can create up to 250 storage accounts per region with standard endpoints in a given subscription. With a quota increase, you can create up to 500 storage accounts with standard endpoints per region. For more information, see [Increase Azure Storage account quotas](../../quotas/storage-account-quota-requests.md).
- [Azure DNS zone endpoints](#azure-dns-zone-endpoints-preview) (preview). You can create up to 5000 storage accounts per region with Azure DNS zone endpoints in a given subscription.
-Within a single subscription, you can create accounts with either standard or Azure DNS Zone endpoints, for a maximum of 5250 accounts per region per subscription. With a quota increase, you can crate up to 5500 storage accounts per region per subscription.
+Within a single subscription, you can create accounts with either standard or Azure DNS Zone endpoints, for a maximum of 5250 accounts per region per subscription. With a quota increase, you can create up to 5500 storage accounts per region per subscription.
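The endpoint type is chosen when the account is created; a hedged sketch, assuming the preview `--dns-endpoint-type` parameter of `az storage account create`:

```azurecli
# Create a storage account that uses an Azure DNS zone endpoint (preview).
az storage account create \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --location <location> \
    --sku Standard_LRS \
    --dns-endpoint-type AzureDnsZone
```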
You can configure your storage account to use a custom domain for the Blob Storage endpoint. For more information, see [Configure a custom domain name for your Azure Storage account](../blobs/storage-custom-domain-name.md).
synapse-analytics Apache Spark Pool Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-pool-configurations.md
-+ Last updated 09/07/2022
synapse-analytics Optimize Write For Apache Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/optimize-write-for-apache-spark.md
Last updated 08/03/2022 -+ # The need for optimize write on Apache Spark
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
Last updated 07/19/2022 -+ # Synapse runtime for Apache Spark lifecycle and supportability
synapse-analytics Analyze Your Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/analyze-your-workload.md
Last updated 11/03/2021 -
synapse-analytics Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/cheat-sheet.md
Title: Cheat sheet for dedicated SQL pool (formerly SQL DW) description: Find links and best practices to quickly build your dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-+ Last updated 11/04/2019--+ # Cheat sheet for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Column Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/column-level-security.md
Last updated 04/19/2020--+ tags: azure-synapse
synapse-analytics Fivetran Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/fivetran-quickstart.md
Title: "Quickstart: Fivetran and dedicated SQL pool (formerly SQL DW)" description: Get started with Fivetran and dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -+ Last updated 10/12/2018--+
synapse-analytics Manage Compute With Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/manage-compute-with-azure-functions.md
Last updated 04/27/2018 -
synapse-analytics Massively Parallel Processing Mpp Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/massively-parallel-processing-mpp-architecture.md
Title: Dedicated SQL pool (formerly SQL DW) architecture description: Learn how Dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics combines distributed query processing capabilities with Azure Storage to achieve high performance and scalability. -+ Last updated 07/20/2022--+ # Dedicated SQL pool (formerly SQL DW) architecture in Azure Synapse Analytics
synapse-analytics Pause And Resume Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-portal.md
Title: "Quickstart: Pause and resume compute in dedicated SQL pool via the Azure
description: Use the Azure portal to pause compute for dedicated SQL pool to save costs. Resume compute when you're ready to use the data warehouse. - Last updated 01/05/2023
synapse-analytics Pause And Resume Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-powershell.md
Title: "Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL
description: You can use Azure PowerShell to pause and resume dedicated SQL pool (formerly SQL DW) compute resources. - Last updated 01/05/2023
synapse-analytics Pause And Resume Compute Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-workspace-powershell.md
Title: "Quickstart: Pause and resume compute in dedicated SQL pool in a Synapse
description: You can use Azure PowerShell to pause and resume dedicated SQL pool compute resources in an Azure Synapse Workspace. - Last updated 02/21/2023
synapse-analytics Quickstart Scale Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md
Title: "Quickstart: Scale compute for an Azure Synapse dedicated SQL pool (formerly SQL DW) with the Azure portal" description: You can scale compute for an Azure Synapse dedicated SQL pool (formerly SQL DW) with the Azure portal.--++ Last updated 02/22/2023
synapse-analytics Quickstart Scale Compute Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-tsql.md
Title: "Quickstart: Scale compute in dedicated SQL pool (formerly SQL DW) - T-SQL" description: Scale compute in dedicated SQL pool (formerly SQL DW) using T-SQL and SQL Server Management Studio (SSMS). Scale out compute for better performance, or scale back compute to save costs.--++ Last updated 02/22/2023
synapse-analytics Quickstart Scale Compute Workspace Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-workspace-portal.md
Title: "Quickstart: Scale compute for an Azure Synapse dedicated SQL pool in a Synapse workspace with the Azure portal" description: Learn how to scale compute for an Azure Synapse dedicated SQL pool in a Synapse workspace with the Azure portal.--++ Last updated 02/22/2023
synapse-analytics Sql Data Warehouse Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-authentication.md
Last updated 04/02/2019 - tag: azure-synapse
synapse-analytics Sql Data Warehouse Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-recommendations.md
Last updated 06/26/2020 -
synapse-analytics Sql Data Warehouse Continuous Integration And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-continuous-integration-and-deployment.md
Last updated 02/04/2020 - # Continuous integration and deployment for dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Develop User Defined Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-user-defined-schemas.md
Last updated 04/17/2018 -
synapse-analytics Sql Data Warehouse Get Started Analyze With Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-analyze-with-azure-machine-learning.md
Title: Analyze data with Azure Machine Learning description: Use Azure Machine Learning to build a predictive machine learning model based on data stored in Azure Synapse.-+ Last updated 07/15/2020--+ tag: azure-Synapse
synapse-analytics Sql Data Warehouse Get Started Create Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-create-support-ticket.md
Last updated 03/10/2020 -
synapse-analytics Sql Data Warehouse How To Convert Resource Classes Workload Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md
Last updated 08/13/2020 -
synapse-analytics Sql Data Warehouse How To Find Queries Running Beyond Wlm Elapsed Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-find-queries-running-beyond-wlm-elapsed-timeout.md
Title: Identify queries running beyond workload group query execution timeout description: Identify queries that are running beyond the workload groups query execution timeout value. --++
synapse-analytics Sql Data Warehouse How To Monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache.md
Last updated 11/20/2020 -
synapse-analytics Sql Data Warehouse How To Troubleshoot Missed Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-troubleshoot-missed-classification.md
Title: Troubleshoot misclassified workload in a dedicated SQL pool description: Identify and troubleshoot scenarios where workloads are misclassified to unintended workload groups in a dedicated SQL pool in Azure Synapse Analytics. --++
synapse-analytics Sql Data Warehouse Install Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-install-visual-studio.md
Last updated 05/11/2020 - # Getting started with Visual Studio 2019
synapse-analytics Sql Data Warehouse Integrate Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics.md
Last updated 10/07/2022 -
synapse-analytics Sql Data Warehouse Load From Azure Blob Storage With Polybase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md
Last updated 11/20/2020 -
synapse-analytics Sql Data Warehouse Manage Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-overview.md
Last updated 11/12/2019 -
synapse-analytics Sql Data Warehouse Manage Compute Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md
Title: Pause, resume, scale with REST APIs for dedicated SQL pool (formerly SQL DW) description: Manage compute power for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics through REST APIs.--++
synapse-analytics Sql Data Warehouse Overview Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-integrate.md
Title: Build integrated solutions description: Solution tools and partners that integrate with a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-+ Last updated 04/17/2018--+
synapse-analytics Sql Data Warehouse Overview Manage Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security.md
Last updated 04/17/2018 - tags: azure-synapse
synapse-analytics Sql Data Warehouse Overview Manageability Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manageability-monitoring.md
Last updated 08/27/2018 -
synapse-analytics Sql Data Warehouse Query Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-query-ssms.md
Last updated 04/17/2018 -
synapse-analytics Sql Data Warehouse Query Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-query-visual-studio.md
Last updated 08/15/2019 -
synapse-analytics Sql Data Warehouse Reference Powershell Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-powershell-cmdlets.md
Last updated 04/17/2018 -
synapse-analytics Sql Data Warehouse Reference Tsql Statements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-statements.md
Last updated 05/01/2019 -
synapse-analytics Sql Data Warehouse Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-points.md
Last updated 07/03/2019 -
synapse-analytics Sql Data Warehouse Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-source-control-integration.md
Last updated 08/23/2019 - # Source Control Integration for dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot-connectivity.md
Last updated 03/27/2019 -
synapse-analytics Sql Data Warehouse Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-videos.md
Last updated 02/15/2019 -
synapse-analytics Striim Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/striim-quickstart.md
Title: Striim quick start description: Get started quickly with Striim and Azure Synapse Analytics.-+ Last updated 10/12/2018--+
synapse-analytics Upgrade To Latest Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/upgrade-to-latest-generation.md
Title: Upgrade to the latest generation of dedicated SQL pool (formerly SQL DW) description: Upgrade Azure Synapse Analytics dedicated SQL pool (formerly SQL DW) to latest generation of Azure hardware and storage architecture.-+ Last updated 02/19/2019-+
synapse-analytics What Is A Data Warehouse Unit Dwu Cdwu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md
Title: Data Warehouse Units (DWUs) for dedicated SQL pool (formerly SQL DW) description: Recommendations on choosing the ideal number of data warehouse units (DWUs) to optimize price and performance, and how to change the number of units.-+ Last updated 11/22/2019--+
synapse-analytics Best Practices Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-dedicated-sql-pool.md
Title: Best practices for dedicated SQL pools description: Recommendations and best practices you should know as you work with dedicated SQL pools. -+ Last updated 09/22/2022--+ # Best practices for dedicated SQL pools in Azure Synapse Analytics
synapse-analytics Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-architecture.md
Title: Synapse SQL architecture description: Learn how Azure Synapse SQL combines distributed query processing capabilities with Azure Storage to achieve high performance and scalability. -+ Last updated 11/01/2022--+
virtual-desktop Create Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pool.md
Here's how to create a host pool using the Azure portal.
| Host pool type | Select whether your host pool will be Personal or Pooled.<br /><br />If you select **Personal**, a new option will appear for **Assignment type**. Select either **Automatic** or **Direct**.<br /><br />If you select **Pooled**, two new options will appear for **Load balancing algorithm** and **Max session limit**.<br /><br />- For **Load balancing algorithm**, choose either **breadth-first** or **depth-first**, based on your usage pattern.<br /><br />- For **Max session limit**, enter the maximum number of users you want load-balanced to a single session host. |

> [!TIP]
- > Once you've completed this tab, you can continue to optionally create session hosts, a workspace, register the default desktop application group from this host pool, and enable diagnostics settings. Alternatively, if you want to create and configure these separately, select **Next: Review + create** and go to step 9.
+ > Once you've completed this tab, you can continue to optionally configure networking, create session hosts, a workspace, register the default desktop application group from this host pool, and enable diagnostics settings. Alternatively, if you want to create and configure these separately, select **Next: Review + create** and go to step 10.
+
+1. *Optional*: On the **Networking** tab, select how end users and session hosts will connect to the Azure Virtual Desktop service. To use private access, you also need to configure Azure Private Link. For more information, see [Azure Private Link with Azure Virtual Desktop](private-link-overview.md). A CLI sketch of this setting appears after these steps.
+
+ | Parameter | Value/Description |
+ |--|--|
+ | **Enable public access from all networks** | End users can access the feed and session hosts securely over the public internet or the private endpoints. |
+ | **Enable public access for end users, use private access for session hosts** | End users can access the feed securely over the public internet but must use private endpoints to access session hosts. |
+ | **Disable public access and use private access** | End users can only access the feed and session hosts over the private endpoints. |
+
+ Once you've completed this tab, select **Next: Virtual Machines**.
1. *Optional*: If you want to add session hosts in this process, on the **Virtual machines** tab, complete the following information:
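The public access options on the **Networking** tab map to the host pool's `publicNetworkAccess` property. A hedged sketch, assuming the `desktopvirtualization` Azure CLI extension exposes it as `--public-network-access` and accepts values such as `EnabledForClientsOnly` (verify with `az desktopvirtualization hostpool update --help`):

```azurecli
# Allow end users over public routes while session hosts use private access.
az desktopvirtualization hostpool update \
    --resource-group <resource-group-name> \
    --name <host-pool-name> \
    --public-network-access EnabledForClientsOnly
```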
virtual-desktop Enable Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/enable-gpu-acceleration.md
To verify that Remote Desktop is using GPU-accelerated encoding:
2. Launch the Event Viewer and navigate to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**

3. To determine if GPU-accelerated encoding is used, look for event ID 170. If you see "AVC hardware encoder enabled: 1" then GPU encoding is used.
+> [!TIP]
+> If you're connecting to your session host outside of Azure Virtual Desktop for testing GPU acceleration, the logs will instead be stored in **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational** in Event Viewer.
+
## Verify fullscreen video encoding

To verify that Remote Desktop is using fullscreen video encoding:
To verify that Remote Desktop is using fullscreen video encoding:
2. Launch the Event Viewer and navigate to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**

3. To determine if fullscreen video encoding is used, look for event ID 162. If you see "AVC Available: 1 Initial Profile: 2048" then AVC 444 is used.
+> [!TIP]
+> If you're connecting to your session host outside of Azure Virtual Desktop for testing GPU acceleration, the logs will instead be stored in **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational** in Event Viewer.
+
## Next steps

These instructions should have you up and running with GPU acceleration on one session host (one VM). Some additional considerations for enabling GPU acceleration across a larger host pool:
virtual-desktop Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md
Title: Use Azure Private Link with Azure Virtual Desktop preview - Azure
-description: Learn how Azure Private Link (preview) can help you keep network traffic private.
-
+ Title: Azure Private Link with Azure Virtual Desktop - Azure
+description: Learn about using Private Link with Azure Virtual Desktop to privately connect to your remote resources.
+ Previously updated : 12/06/2022-- Last updated : 07/10/2023+
-# Use Azure Private Link with Azure Virtual Desktop (preview)
+# Azure Private Link with Azure Virtual Desktop
-> [!IMPORTANT]
-> Private Link for Azure Virtual Desktop is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-You can use a [private endpoint](../private-link/private-endpoint-overview.md) from Azure Private Link with Azure Virtual Desktop to privately connect to your remote resources. With Private Link, traffic between your virtual network and the service travels the Microsoft "backbone" network, which means you'll no longer need to expose your service to the public internet. Keeping traffic within this "backbone" network improves security and keeps your data safe. This article describes how Private Link can help you secure your Azure Virtual Desktop environment.
+You can use [Azure Private Link](../private-link/private-link-overview.md) with Azure Virtual Desktop to privately connect to your remote resources. By creating a [private endpoint](../private-link/private-endpoint-overview.md), traffic between your virtual network and the service remains on the Microsoft network, so you no longer need to expose your service to the public internet. You can also use a VPN or ExpressRoute so that users with the Remote Desktop client can connect to the virtual network. Keeping traffic within the Microsoft network improves security and keeps your data safe. This article describes how Private Link can help you secure your Azure Virtual Desktop environment.
## How does Private Link work with Azure Virtual Desktop?

Azure Virtual Desktop has three workflows with three corresponding resource types of private endpoints:

-- The first workflow, initial feed discovery, lets the client discover all workspaces assigned to a user. To enable this process, you must create a single private endpoint to the global sub-resource of any workspace. However, you can only create one private endpoint in your entire Azure Virtual Desktop deployment. This endpoint creates Domain Name System (DNS) entries and private IP routes for the global fully-qualified domain name (FQDN) needed for initial feed discovery. This connection becomes a single, shared route for all clients to use.
+1. **Initial feed discovery**: lets the client discover all workspaces assigned to a user. To enable this process, you must create a single private endpoint to the *global* sub-resource of any workspace. However, you can only create one such private endpoint in your entire Azure Virtual Desktop deployment. This endpoint creates Domain Name System (DNS) entries and private IP routes for the global fully qualified domain name (FQDN) needed for initial feed discovery. This connection becomes a single, shared route for all clients to use.
+
+2. **Feed download**: the client downloads all connection details for a specific user for the workspaces that host their application groups. You create a private endpoint for the *feed* sub-resource for each workspace you want to use with Private Link.
+
+3. **Connections to host pools**: every connection to a host pool has two sides, clients and session host virtual machines (VMs). To enable connections, you need to create a private endpoint for the *connection* sub-resource for each host pool you want to use with Private Link.
-- The next workflow is feed download, which is when the client downloads all connection details for a specific user for the workspaces that host their application groups. To enable this feed download, you must create a private endpoint for each workspace you want to enable. This endpoint will be to the workspace sub-resource of the specific workspace you want to allow.
+The following table summarizes the private endpoints you need to create:
-- The final workflow involves making connections to host pools. Every connection has two sides: clients and session host VMs. To enable connections, you need to create a private endpoint for the host pool sub-resource of any host pool you want to allow.
+| Purpose | Resource type | Target sub-resource | Quantity |
+|--|--|--|--|
+| Initial feed discovery | Microsoft.DesktopVirtualization/workspaces | global | One for all your Azure Virtual Desktop deployments |
+| Feed download | Microsoft.DesktopVirtualization/workspaces | feed | One per workspace |
+| Connections to host pools | Microsoft.DesktopVirtualization/hostpools | connection | One per host pool |
-You can either share these private endpoints across your network topology or you can isolate your virtual networks (VNets) so that each has their own private endpoint to the host pool or workspace.
+You can either share these private endpoints across your network topology or you can isolate your virtual networks so that each has their own private endpoint to the host pool or workspace.
-The following diagram shows how Private Link securely connects a local client to the Azure Virtual Desktop service:
+The following high-level diagram shows how Private Link securely connects a local client to the Azure Virtual Desktop service:
## Supported scenarios
-When adding Private Link, you can connect to Azure Virtual Desktop in the following ways:
+When you add Private Link with Azure Virtual Desktop, you have the following options for connecting to Azure Virtual Desktop. Each can be enabled or disabled depending on your requirements.
-- Both the clients and the session host VMs use public routes, which don't require Private Link.
-- The clients use public routes while session host VMs use private routes.
- Both clients and session host VMs use private routes.
+- Clients use public routes while session host VMs use private routes.
+- Both clients and session host VMs use public routes. Private Link isn't used.
-> [!NOTE]
-> If you intend to restrict network ports from either the user client devices or your session host VMs to the private endpoints, you will need to allow traffic across the entire TCP dynamic port range of 1 - 65535 to the private endpoint for the host pool resource using the *connection* sub-resource. The entire TCP dynamic port range is needed because port mapping is used to all global gateways through the single private endpoint IP address corresponding to the *connection* sub-resource.
+> [!IMPORTANT]
+> - A private endpoint to the global sub-resource of any workspace controls the shared fully qualified domain name (FQDN) for initial feed discovery. This in turn enables feed discovery for all workspaces. Because the workspace connected to the private endpoint is so important, deleting it will cause all feed discovery processes to stop working. We recommend you create an unused placeholder workspace for the global sub-resource.
>
-> If you restrict ports to the private endpoint, your users may not be able to connect successfully to Azure Virtual Desktop.
+> - If you intend to restrict network ports from either the user client devices or your session host VMs to the private endpoints, you will need to allow traffic across the entire TCP dynamic port range of 1 - 65535 to the private endpoint for the host pool resource using the *connection* sub-resource. The entire TCP dynamic port range is needed because port mapping is used to all global gateways through the single private endpoint IP address corresponding to the *connection* sub-resource. If you restrict ports to the private endpoint, your users may not be able to connect successfully to Azure Virtual Desktop.
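For example, if that traffic passes through a network security group, the allow rule would need to span the full dynamic range; a hedged sketch with placeholder names:

```azurecli
# Allow all TCP dynamic ports from the virtual network to the host pool's
# connection private endpoint IP address.
az network nsg rule create \
    --resource-group <resource-group-name> \
    --nsg-name <nsg-name> \
    --name AllowAvdPrivateEndpoint \
    --priority 100 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes VirtualNetwork \
    --destination-address-prefixes <private-endpoint-ip-address> \
    --destination-port-ranges 1-65535
```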
-## Public preview limitations
+## Limitations
-The public preview of using Private Link with Azure Virtual Desktop has the following limitations:
+Private Link with Azure Virtual Desktop has the following limitations:
-- You'll need to [re-register your resource provider](private-link-setup.md#re-register-your-resource-provider) in order to use Private Link.
+- You need to [enable the feature](private-link-setup.md#enable-the-feature) on each Azure subscription where you want to use Private Link with Azure Virtual Desktop.
- You can't use the [manual connection approval method](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow) when using Private Link with Azure Virtual Desktop. We're aware of this issue and are working on fixing it.
-
-- All Azure Virtual Desktop clients are compatible with Private Link, but we currently only offer troubleshooting support for the web client version of Private Link.
-
-- A private endpoint to the global sub-resource of any workspace controls the shared FQDN for initial feed discovery. This control enables feed discovery for all workspaces. Because the workspace connected to the private endpoint is so important, deleting it will cause all feed discovery processes to stop working. Instead of deleting the workspace, you should create an unused placeholder workspace to terminate the global endpoint.
-
-- Validation for data path access checks, particularly those that prevent exfiltration, are still being validated. To help us with validation, the preview version of this feature will collect feedback from customers regarding their exfiltration requirements, particularly their preferences for how to audit and analyze findings. We don't recommend or support using the preview version of this feature for production data traffic.
+- All [Remote Desktop clients used to connect to Azure Virtual Desktop](users/remote-desktop-clients-overview.md) can be used with Private Link, but we currently only offer troubleshooting support for the web client with Private Link.
-- After you've changed a private endpoint to a host pool, you must restart the *Remote Desktop Agent Loader* (RDAgentBootLoader) service on the session host VM. You'll also need to restart this service whenever you change a host pool's network configuration. Instead of restarting the service, you can restart the session host.
+- After you've changed a private endpoint to a host pool, you must restart the *Remote Desktop Agent Loader* (RDAgentBootLoader) service on the session host VM. You also need to restart this service whenever you change a host pool's network configuration. Instead of restarting the service, you can restart the session host.
-- Service tags are used by the Azure Virtual Desktop service for agent monitoring traffic. The service automatically creates these tags.
+- Service tags are used by the Azure Virtual Desktop service for agent monitoring traffic. These tags are created automatically.
-- The public preview doesn't support using both Private Link and [RDP Shortpath](./shortpath.md) at the same time.
+- Using both Private Link and [RDP Shortpath](./shortpath.md) at the same time isn't supported.
## Next steps

-- Learn about how to set up Private Link with Azure Virtual Desktop at [Set up Private Link for Azure Virtual Desktop](private-link-setup.md).
+- Learn how to [Set up Private Link with Azure Virtual Desktop](private-link-setup.md).
- Learn how to configure Azure Private Endpoint DNS at [Private Link DNS integration](../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder). - For general troubleshooting guides for Private Link, see [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md).-- Understand how connectivity for the Azure Virtual Desktop service works at[Azure Virtual Desktop network connectivity](network-connectivity.md).-- See the [Required URL list](safe-url-list.md) for the list of URLs you'll need to unblock to ensure network access to the Azure Virtual Desktop service.
+- Understand [Azure Virtual Desktop network connectivity](network-connectivity.md).
+- See the [Required URL list](safe-url-list.md) for the list of URLs you need to unblock to ensure network access to the Azure Virtual Desktop service.
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
Title: Set up Private Link for Azure Virtual Desktop preview - Azure
-description: How to set up Private Link for Azure Virtual Desktop (preview).
-
+ Title: Set up Private Link with Azure Virtual Desktop - Azure
+description: Learn how to set up Private Link with Azure Virtual Desktop to privately connect to your remote resources.
+ Previously updated : 06/15/2023-- Last updated : 07/13/2023+
-# Set up Private Link for Azure Virtual Desktop (preview)
+# Set up Private Link with Azure Virtual Desktop
+
+This article shows you how to set up Private Link with Azure Virtual Desktop to privately connect to your remote resources. For more information about using Private Link with Azure Virtual Desktop, including limitations, see [Azure Private Link with Azure Virtual Desktop](private-link-overview.md).
+
+## Prerequisites
+
+In order to use Private Link with Azure Virtual Desktop, you need the following things:
+
- An existing [host pool](create-host-pool.md) with [session hosts](add-session-hosts-host-pool.md), and an [application group and workspace](create-application-group-workspace.md).
+
+- An existing [virtual network](../virtual-network/manage-virtual-network.md) and [subnet](../virtual-network/virtual-network-manage-subnet.md) you want to use for private endpoints.
+
+- The [required Azure role-based access control permissions to create private endpoints](../private-link/rbac-permissions.md).
+
+- If you're using the [Remote Desktop client for Windows](./users/connect-windows.md), you must use version 1.2.4066 or later to connect using a private endpoint.
+
+- If you want to use Azure CLI or Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension or the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
+
+## Enable the feature
+
+To use Private Link with Azure Virtual Desktop, you first need to re-register the *Microsoft.DesktopVirtualization* resource provider and register the *Azure Virtual Desktop Private Link* feature on your Azure subscription.
> [!IMPORTANT]
-> Private Link for Azure Virtual Desktop is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> You need to re-register the resource provider and register the feature on each subscription where you want to use Private Link with Azure Virtual Desktop.
-This article will show you how to set up Private Link for Azure Virtual Desktop (preview) in your Azure Virtual Desktop deployment. For more information about what Private Link can do for your deployment and the limitations of the public preview version, see [Private Link for Azure Virtual Desktop (preview)](private-link-overview.md).
+### Re-register the resource provider
-## Prerequisites
+To re-register the *Microsoft.DesktopVirtualization* resource provider:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
-In order to use Private Link in your Azure Virtual Desktop deployment, you'll need the following things:
+1. In the search bar, enter **Subscriptions** and select the matching service entry.
-- An Azure account with an active subscription.
-- An Azure Virtual Desktop deployment with service objects, such as host pools, application groups, and [workspaces](environment-setup.md#workspaces).
-- The [required permissions to use Private Link](../private-link/rbac-permissions.md).
+1. Select the name of your subscription, then in the section **Settings**, select **Resource providers**.
->[!IMPORTANT]
->There's currently a bug in version 1.2.3918 of the Remote Desktop client for Windows that causes a client regression when you use Private Link. In order to use Private Link in your deployment, you must use a version later than 1.2.3918. Using an earlier version of the Remote Desktop client can potentially cause security issues. We don't recommend using version 1.2.3918 for environments or VMs that you aren't using to preview Private Link.
+1. Search for and select **Microsoft.DesktopVirtualization**, then select **Re-register**.
-### Re-register your resource provider
+1. Verify that the status of *Microsoft.DesktopVirtualization* is **Registered**.
-In the public preview version of Private Link, after you create your resources, you'll need to re-register them to your resource provider before you can start using Private Link. Re-registering allows the service to download and assign the new roles that will let you use this feature.
+### Register the feature
-To re-register your resource provider:
+To register the *Azure Virtual Desktop Private Link* feature:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Subscriptions**.
+1. In the search bar, enter **Subscriptions** and select the matching service entry.
-1. Select the name of your subscription.
+1. Select the name of your subscription, then in the **Settings** section, select **Preview features**.
-1. Select **Resource providers**.
+1. Select the drop-down list for the filter **Type** and set it to **Microsoft.DesktopVirtualization**.
-1. Search for **Microsoft.DesktopVirtualization**.
+1. Select **Azure Virtual Desktop Private Link**, then select **Register**.
-1. Select **Microsoft.DesktopVirtualization**, then select **Re-register**.
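If you prefer to script these one-time registrations, a hedged sketch using `az provider` and `az feature`; the feature flag's internal name isn't shown here, so list it first:

```azurecli
# Re-register the resource provider.
az provider register --namespace Microsoft.DesktopVirtualization

# Find the internal name of the Azure Virtual Desktop Private Link flag.
az feature list --namespace Microsoft.DesktopVirtualization --output table

# Register the feature by the name returned above (placeholder).
az feature register --namespace Microsoft.DesktopVirtualization --name <feature-name>
```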
+## Create private endpoints
-1. Verify that the status of Microsoft.DesktopVirtualization is **Registered**.
+During the setup process, you create private endpoints to the following resources:
-## Enable preview content on your Azure subscription
+| Purpose | Resource type | Target sub-resource | Quantity | Private DNS zone name |
+|--|--|--|--|--|
+| Connections to host pools | Microsoft.DesktopVirtualization/hostpools | connection | One per host pool | `privatelink.wvd.microsoft.com` |
+| Feed download | Microsoft.DesktopVirtualization/workspaces | feed | One per workspace | `privatelink.wvd.microsoft.com` |
+| Initial feed discovery | Microsoft.DesktopVirtualization/workspaces | global | **Only one for all your Azure Virtual Desktop deployments** | `privatelink-global.wvd.microsoft.com` |
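If you plan to integrate with private DNS, the two zones named in the table can be created ahead of time; a minimal sketch using `az network private-dns zone create` (the resource group name is a placeholder):

```azurecli
# Create the private DNS zones used by the Azure Virtual Desktop private endpoints.
az network private-dns zone create \
    --resource-group <resource-group-name> \
    --name privatelink.wvd.microsoft.com

az network private-dns zone create \
    --resource-group <resource-group-name> \
    --name privatelink-global.wvd.microsoft.com
```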
-In order to use Private Link, you'll need to register your Azure subscription to use Private Link. To register your subscription:
+### Connections to host pools
-1. Sign in to the [Azure portal](https://portal.azure.com).
+To create a private endpoint for the *connection* sub-resource for connections to a host pool, select the relevant tab for your scenario and follow the steps.
+
+# [Portal](#tab/portal)
+
+Here's how to create a private endpoint for the *connection* sub-resource for connections to a host pool using the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry to go to the Azure Virtual Desktop overview.
+
+1. Select **Host pools**, then select the name of the host pool for which you want to create a *connection* sub-resource.
+
+1. From the host pool overview, select **Networking**, then **Private endpoint connections**, and finally **New private endpoint**.
+
+1. On the **Basics** tab, complete the following information:
-1. In the search box, enter and select **Subscriptions**.
+ | Parameter | Value/Description |
+ |--|--|
+ | Subscription | Select the subscription you want to create the private endpoint in from the drop-down list. |
 | Resource group | This automatically defaults to the same resource group as your host pool for the private endpoint, but you can also select an alternative existing one from the drop-down list, or create a new one. |
+ | Name | Enter a name for the new private endpoint. |
+ | Network interface name | The network interface name fills in automatically based on the name you gave the private endpoint, but you can also specify a different name. |
+   | Region | This automatically defaults to the same Azure region as the host pool and is where the private endpoint is deployed. This must be the same region as your virtual network and session hosts. |
+
+ Once you've completed this tab, select **Next: Resource**.
+
+1. On the **Resource** tab, validate the values for *Subscription*, *Resource type*, and *Resource*, then for **Target sub-resource**, select **connection**. Once you've completed this tab, select **Next: Virtual Network**.
+
+1. On the **Virtual Network** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Virtual network | Select the virtual network you want to create the private endpoint in from the drop-down list. |
+ | Subnet | Select the subnet of the virtual network you want to create the private endpoint in from the drop-down list. |
+ | Network policy for private endpoints | Select **edit** if you want to choose a subnet network policy. For more information, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md). |
+ | Private IP configuration | Select **Dynamically allocate IP address** or **Statically allocate IP address**. The address space is from the subnet you selected.<br /><br />If you choose to statically allocate IP addresses, you need to fill in the **Name** and **Private IP** for each listed member. |
+ | Application security group | *Optional*: select an existing application security group for the private endpoint from the drop-down list, or create a new one. You can also add one later. |
+
+ Once you've completed this tab, select **Next: DNS**.
+
+1. On the **DNS** tab, choose whether you want to use [Azure Private DNS Zone](../dns/private-dns-privatednszone.md) by selecting **Yes** or **No** for **Integrate with private DNS zone**. If you select **Yes**, select the subscription and resource group in which to create the private DNS zone `privatelink.wvd.microsoft.com`. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
+
+ Once you've completed this tab, select **Next: Tags**.
+
+1. *Optional*: On the **Tags** tab, you can enter any [name/value pairs](../azure-resource-manager/management/tag-resources.md) you need, then select **Next: Review + create**.
+
+1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment.
+
+1. Select **Create** to create the private endpoint for the connection sub-resource.
+
+# [Azure CLI](#tab/cli)
+
+Here's how to create a private endpoint for the *connection* sub-resource used for connections to a host pool using the [network](/cli/azure/network) and [desktopvirtualization](/cli/azure/desktopvirtualization) extensions for Azure CLI.
+
+> [!IMPORTANT]
+> In the following examples, you'll need to replace the `<placeholder>` values with your own.
+
+2. Create a Private Link service connection and the private endpoint for a host pool with the connection sub-resource by running the commands in one of the following examples.
+
+ 1. To create a private endpoint with a dynamically allocated IP address:
+
+ ```azurecli
+ # Specify the Azure region. This must be the same region as your virtual network and session hosts.
+ location=<Location>
+
+ # Get the resource ID of the host pool
+ hostPoolId=$(az desktopvirtualization hostpool show \
+ --name <HostPoolName> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+
+ # Create a service connection and the private endpoint
+ az network private-endpoint create \
+ --name <PrivateEndpointName> \
+ --resource-group <ResourceGroupName> \
+ --location $location \
+ --vnet-name <VNetName> \
+ --subnet <SubnetName> \
+ --connection-name <ConnectionName> \
+ --private-connection-resource-id $hostPoolId \
+ --group-id connection \
+ --output table
+ ```
+
+ 1. To create a private endpoint with statically allocated IP addresses:
+
+ ```azurecli
+ # Specify the Azure region. This must be the same region as your virtual network and session hosts.
+ location=<Location>
+
+ # Get the resource ID of the host pool
+ hostPoolId=$(az desktopvirtualization hostpool show \
+ --name <HostPoolName> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+
+ # Store each private endpoint IP configuration in a variable
+ ip1={name:ipconfig1,group-id:connection,member-name:broker,private-ip-address:<IPAddress>}
+ ip2={name:ipconfig2,group-id:connection,member-name:diagnostics,private-ip-address:<IPAddress>}
+ ip3={name:ipconfig3,group-id:connection,member-name:gateway-ring-map,private-ip-address:<IPAddress>}
+ ip4={name:ipconfig4,group-id:connection,member-name:web,private-ip-address:<IPAddress>}
+
+ # Create a service connection and the private endpoint
+ az network private-endpoint create \
+ --name <PrivateEndpointName> \
+ --resource-group <ResourceGroupName> \
+ --location $location \
+ --vnet-name <VNetName> \
+ --subnet <SubnetName> \
+ --connection-name <ConnectionName> \
+ --private-connection-resource-id $hostPoolId \
+ --group-id connection \
+ --ip-configs [$ip1,$ip2,$ip3,$ip4] \
+ --output table
+ ```
+
+ Your output should be similar to the following. Check that the value for **ProvisioningState** is **Succeeded**.
+
+ ```output
+    CustomNetworkInterfaceName    Location    Name           ProvisioningState    ResourceGroup
+    ----------------------------  ----------  -------------  -------------------  ---------------
+                                  uksouth     endpoint-hp01  Succeeded            privatelink
+ ```
+
+3. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure CLI, see [Configure the private DNS zone](../private-link/create-private-endpoint-cli.md#configure-the-private-dns-zone).
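+
+    If you also want to script the DNS setup, here's a minimal sketch under the same assumptions; `<DnsLinkName>` and `<ZoneGroupName>` are placeholder names of your choosing:
+
+    ```azurecli
+    # Create the private DNS zone
+    az network private-dns zone create \
+        --resource-group <ResourceGroupName> \
+        --name privatelink.wvd.microsoft.com
+
+    # Link the zone to the virtual network so its records resolve there
+    az network private-dns link vnet create \
+        --resource-group <ResourceGroupName> \
+        --zone-name privatelink.wvd.microsoft.com \
+        --name <DnsLinkName> \
+        --virtual-network <VNetName> \
+        --registration-enabled false
+
+    # Attach the zone to the private endpoint so its A records are managed automatically
+    az network private-endpoint dns-zone-group create \
+        --resource-group <ResourceGroupName> \
+        --endpoint-name <PrivateEndpointName> \
+        --name <ZoneGroupName> \
+        --private-dns-zone privatelink.wvd.microsoft.com \
+        --zone-name wvd
+    ```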
+
+# [Azure PowerShell](#tab/powershell)
+
+Here's how to create a private endpoint for the *connection* sub-resource used for connections to a host pool using the [Az.Network](/powershell/module/az.network/) and [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization/) PowerShell modules.
+
+> [!IMPORTANT]
+> In the following examples, you'll need to replace the `<placeholder>` values with your own.
-1. Select the name of your subscription.
-1. In the menu on the left side of the screen, look under **Settings** and select **Preview features**.
+2. Get the details of the virtual network and subnet you want to use for the private endpoint and store them in a variable by running the following command:
+
+ ```azurepowershell
+ # Get the subnet details for the virtual network
+ $subnet = (Get-AzVirtualNetwork -Name <VNetName> -ResourceGroupName <ResourceGroupName>).Subnets | ? Name -eq <SubnetName>
+ ```
+
+3. Create a Private Link service connection for a host pool with the connection sub-resource by running the following commands.
+
+ ```azurepowershell
+ # Get the resource ID of the host pool
+ $hostPoolId = (Get-AzWvdHostPool -Name <HostPoolName> -ResourceGroupName <ResourceGroupName>).Id
+
+ # Create the service connection
+ $parameters = @{
+ Name = '<ServiceConnectionName>'
+ PrivateLinkServiceId = $hostPoolId
+ GroupId = 'connection'
+ }
+
+ $serviceConnection = New-AzPrivateLinkServiceConnection @parameters
+ ```
+
+4. Finally, create the private endpoint by running the commands in one of the following examples.
+
+ 1. To create a private endpoint with a dynamically allocated IP address:
+
+ ```azurepowershell
+ # Specify the Azure region. This must be the same region as your virtual network and session hosts.
+ $location = '<Location>'
+
+ # Create the private endpoint
+ $parameters = @{
+ Name = '<PrivateEndpointName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ Location = $location
+ Subnet = $subnet
+ PrivateLinkServiceConnection = $serviceConnection
+ }
+
+ New-AzPrivateEndpoint @parameters
+ ```
+
+ 1. To create a private endpoint with statically allocated IP addresses:
+
+ ```azurepowershell
+ # Specify the Azure region. This must be the same region as your virtual network and session hosts.
+ $location = '<Location>'
+
+ # Create a hash table for each private endpoint IP configuration
+ $ip1 = @{
+ Name = 'ipconfig1'
+ GroupId = 'connection'
+ MemberName = 'broker'
+ PrivateIPAddress = '<IPAddress>'
+ }
+
+ $ip2 = @{
+ Name = 'ipconfig2'
+ GroupId = 'connection'
+ MemberName = 'diagnostics'
+ PrivateIPAddress = '<IPAddress>'
+ }
+
+ $ip3 = @{
+ Name = 'ipconfig3'
+ GroupId = 'connection'
+ MemberName = 'gateway-ring-map'
+ PrivateIPAddress = '<IPAddress>'
+ }
+
+ $ip4 = @{
+ Name = 'ipconfig4'
+ GroupId = 'connection'
+ MemberName = 'web'
+ PrivateIPAddress = '<IPAddress>'
+ }
+
+ # Create the private endpoint IP configurations
+ $ipConfig1 = New-AzPrivateEndpointIpConfiguration @ip1
+ $ipConfig2 = New-AzPrivateEndpointIpConfiguration @ip2
+ $ipConfig3 = New-AzPrivateEndpointIpConfiguration @ip3
+ $ipConfig4 = New-AzPrivateEndpointIpConfiguration @ip4
+
+ # Create the private endpoint
+ $parameters = @{
+ Name = '<PrivateEndpointName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ Location = $location
+ Subnet = $subnet
+ PrivateLinkServiceConnection = $serviceConnection
+ IpConfiguration = $ipConfig1, $ipConfig2, $ipConfig3, $ipConfig4
+ }
+
+ New-AzPrivateEndpoint @parameters
+ ```
+
+ Your output should be similar to the following. Check that the value for **ProvisioningState** is **Succeeded**.
+
+ ```output
+    ResourceGroupName Name          Location ProvisioningState Subnet
+    ----------------- ------------- -------- ----------------- ------
+    privatelink       endpoint-hp01 uksouth  Succeeded
+ ```
+
+5. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure PowerShell, see [Configure the private DNS zone](../private-link/create-private-endpoint-powershell.md#configure-the-private-dns-zone).
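+
+    If you also want to script the DNS setup, here's a minimal sketch under the same assumptions; `<DnsLinkName>` and `<ZoneGroupName>` are placeholder names of your choosing:
+
+    ```azurepowershell
+    # Create the private DNS zone
+    $zone = New-AzPrivateDnsZone -ResourceGroupName <ResourceGroupName> -Name 'privatelink.wvd.microsoft.com'
+
+    # Link the zone to the virtual network so its records resolve there
+    $vnet = Get-AzVirtualNetwork -Name <VNetName> -ResourceGroupName <ResourceGroupName>
+    New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName <ResourceGroupName> -ZoneName 'privatelink.wvd.microsoft.com' -Name '<DnsLinkName>' -VirtualNetworkId $vnet.Id
+
+    # Attach the zone to the private endpoint so its A records are managed automatically
+    $config = New-AzPrivateDnsZoneConfig -Name 'wvd' -PrivateDnsZoneId $zone.ResourceId
+    New-AzPrivateDnsZoneGroup -ResourceGroupName <ResourceGroupName> -PrivateEndpointName '<PrivateEndpointName>' -Name '<ZoneGroupName>' -PrivateDnsZoneConfig $config
+    ```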
+
+---
+
+> [!IMPORTANT]
+> You need to create a private endpoint for the connection sub-resource for each host pool you want to use with Private Link.
+
+### Feed download
+
+To create a private endpoint for the *feed* sub-resource for a workspace, select the relevant tab for your scenario and follow the steps.
+
+# [Portal](#tab/portal)
+
+1. From the Azure Virtual Desktop overview, select **Workspaces**, then select the name of the workspace for which you want to create a *feed* sub-resource.
+
+1. From the workspace overview, select **Networking**, then **Private endpoint connections**, and finally **New private endpoint**.
+
+1. On the **Basics** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Subscription | Select the subscription you want to create the private endpoint in from the drop-down list. |
+ | Resource group | This automatically defaults to the same resource group as your workspace for the private endpoint, but you can also select an alternative existing one from the drop-down list, or create a new one. |
+ | Name | Enter a name for the new private endpoint. |
+ | Network interface name | The network interface name fills in automatically based on the name you gave the private endpoint, but you can also specify a different name. |
+ | Region | This automatically defaults to the same Azure region as the workspace and is where the private endpoint is deployed. This must be the same region as your virtual network. |
+
+ Once you've completed this tab, select **Next: Resource**.
+
+1. On the **Resource** tab, validate the values for *Subscription*, *Resource type*, and *Resource*, then for **Target sub-resource**, select **feed**. Once you've completed this tab, select **Next: Virtual Network**.
+
+1. On the **Virtual Network** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Virtual network | Select the virtual network you want to create the private endpoint in from the drop-down list. |
+ | Subnet | Select the subnet of the virtual network you want to create the private endpoint in from the drop-down list. |
+ | Network policy for private endpoints | Select **edit** if you want to choose a subnet network policy. For more information, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md). |
+ | Private IP configuration | Select **Dynamically allocate IP address** or **Statically allocate IP address**. The address space is from the subnet you selected.<br /><br />If you choose to statically allocate IP addresses, you need to fill in the **Name** and **Private IP** for each listed member. |
+ | Application security group | *Optional*: select an existing application security group for the private endpoint from the drop-down list, or create a new one. You can also add one later. |
+
+ Once you've completed this tab, select **Next: DNS**.
+
+1. On the **DNS** tab, choose whether you want to use [Azure Private DNS Zone](../dns/private-dns-privatednszone.md) by selecting **Yes** or **No** for **Integrate with private DNS zone**. If you select **Yes**, select the subscription and resource group in which to create the private DNS zone `privatelink.wvd.microsoft.com`. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
+
+ Once you've completed this tab, select **Next: Tags**.
+
+1. *Optional*: On the **Tags** tab, you can enter any [name/value pairs](../azure-resource-manager/management/tag-resources.md) you need, then select **Next: Review + create**.
+
+1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment.
+
+1. Select **Create** to create the private endpoint for the feed sub-resource.
+
+# [Azure CLI](#tab/cli)
+
+1. In the same CLI session, create a Private Link service connection and the private endpoint for a workspace with the feed sub-resource by running the following commands.
+
+ 1. To create a private endpoint with a dynamically allocated IP address:
+
+ ```azurecli
+ # Specify the Azure region. This must be the same region as your virtual network.
+ location=<Location>
+
+ # Get the resource ID of the workspace
+ workspaceId=$(az desktopvirtualization workspace show \
+ --name <WorkspaceName> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+
+ # Create a service connection and the private endpoint
+ az network private-endpoint create \
+ --name <PrivateEndpointName> \
+ --resource-group <ResourceGroupName> \
+ --location $location \
+ --vnet-name <VNetName> \
+ --subnet <SubnetName> \
+ --connection-name <ConnectionName> \
+ --private-connection-resource-id $workspaceId \
+ --group-id feed \
+ --output table
+ ```
+
+ 1. To create a private endpoint with statically allocated IP addresses:
+
+ ```azurecli
+ # Specify the Azure region. This must be the same region as your virtual network.
+ location=<Location>
+
+ # Get the resource ID of the workspace
+ workspaceId=$(az desktopvirtualization workspace show \
+ --name <WorkspaceName> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+
+ # Store each private endpoint IP configuration in a variable
+ ip1={name:ipconfig1,group-id:feed,member-name:web-r1,private-ip-address:<IPAddress>}
+ ip2={name:ipconfig2,group-id:feed,member-name:web-r0,private-ip-address:<IPAddress>}
+
+ # Create a service connection and the private endpoint
+ az network private-endpoint create \
+ --name <PrivateEndpointName> \
+ --resource-group <ResourceGroupName> \
+ --location $location \
+ --vnet-name <VNetName> \
+ --subnet <SubnetName> \
+ --connection-name <ConnectionName> \
+ --private-connection-resource-id $workspaceId \
+ --group-id feed \
+ --ip-configs [$ip1,$ip2] \
+ --output table
+ ```
+
+ Your output should be similar to the following. Check that the value for **ProvisioningState** is **Succeeded**.
+
+ ```output
+    CustomNetworkInterfaceName    Location    Name           ProvisioningState    ResourceGroup
+    ----------------------------  ----------  -------------  -------------------  ---------------
+                                  uksouth     endpoint-ws01  Succeeded            privatelink
+ ```
+
+1. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure CLI, see [Configure the private DNS zone](../private-link/create-private-endpoint-cli.md#configure-the-private-dns-zone).
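+
+    Optionally, you can confirm which fully qualified domain names the endpoint maps to private IP addresses by querying its `customDnsConfigs` property:
+
+    ```azurecli
+    # Show the FQDN-to-private-IP mappings recorded on the private endpoint
+    az network private-endpoint show \
+        --name <PrivateEndpointName> \
+        --resource-group <ResourceGroupName> \
+        --query customDnsConfigs
+    ```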
+
+# [Azure PowerShell](#tab/powershell)
+
+1. In the same PowerShell session, create a Private Link service connection for a workspace with the feed sub-resource by running the following commands. In these examples, the same virtual network and subnet are used.
+
+ ```azurepowershell
+ # Get the resource ID of the workspace
+ $workspaceId = (Get-AzWvdWorkspace -Name <WorkspaceName> -ResourceGroupName <ResourceGroupName>).Id
+
+ # Create the service connection
+ $parameters = @{
+ Name = '<ServiceConnectionName>'
+ PrivateLinkServiceId = $workspaceId
+ GroupId = 'feed'
+ }
+
+ $serviceConnection = New-AzPrivateLinkServiceConnection @parameters
+ ```
+
+1. Finally, create the private endpoint by running the commands in one of the following examples.
+
+ 1. To create a private endpoint with a dynamically allocated IP address:
+
+ ```azurepowershell
+ # Specify the Azure region. This must be the same region as your virtual network.
+ $location = '<Location>'
+
+ # Create the private endpoint
+ $parameters = @{
+ Name = '<PrivateEndpointName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ Location = $location
+ Subnet = $subnet
+ PrivateLinkServiceConnection = $serviceConnection
+ }
+
+ New-AzPrivateEndpoint @parameters
+ ```
-1. In the search box that opens, enter **Private**.
+   1. To create a private endpoint with statically allocated IP addresses:
+
+ ```azurepowershell
+ # Specify the Azure region. This must be the same region as your virtual network.
+ $location = '<Location>'
+
+ # Create a hash table for each private endpoint IP configuration
+ $ip1 = @{
+ Name = 'ipconfig1'
+ GroupId = 'feed'
+ MemberName = 'web-r1'
+ PrivateIPAddress = '<IPAddress>'
+ }
+
+ $ip2 = @{
+ Name = 'ipconfig2'
+ GroupId = 'feed'
+ MemberName = 'web-r0'
+ PrivateIPAddress = '<IPAddress>'
+ }
+
+ # Create the private endpoint IP configurations
+ $ipConfig1 = New-AzPrivateEndpointIpConfiguration @ip1
+ $ipConfig2 = New-AzPrivateEndpointIpConfiguration @ip2
+
+ # Create the private endpoint
+ $parameters = @{
+ Name = '<PrivateEndpointName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ Location = $location
+ Subnet = $subnet
+ PrivateLinkServiceConnection = $serviceConnection
+ IpConfiguration = $ipConfig1, $ipConfig2
+ }
+
+ New-AzPrivateEndpoint @parameters
+ ```
-1. Select **Azure Virtual Desktop Private Link Public Preview**.
+ Your output should be similar to the following. Check that the value for **ProvisioningState** is **Succeeded**.
-1. Select **Register**.
+ ```output
+    ResourceGroupName Name          Location ProvisioningState Subnet
+    ----------------- ------------- -------- ----------------- ------
+    privatelink       endpoint-ws01 uksouth  Succeeded
+ ```
-Once you select **Register**, you'll be able to use Private Link.
+1. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure PowerShell, see [Configure the private DNS zone](../private-link/create-private-endpoint-powershell.md#configure-the-private-dns-zone).
-## Create a placeholder workspace
+
-A private endpoint to the global sub-resource of any workspace controls the shared fully qualified domain name (FQDN) for initial feed discovery. This control enables feed discovery for all workspaces. Because the workspace connected to the private endpoint is so important, deleting it will cause all feed discovery processes to stop working. Instead of deleting the workspace, you should create an unused placeholder workspace to terminate the global endpoint before you start using Private Link. To create a workspace, follow the instructions in [Workspace information](create-host-pools-azure-marketplace.md#workspace-information).
+> [!IMPORTANT]
+> You need to create a private endpoint for the feed sub-resource for each workspace you want to use with Private Link.
-## Set up Private Link in the Azure portal
+### Initial feed discovery
-Now, let's set up Private Link for your host pool. During the setup process, you'll create private endpoints to the following resources:
+To create a private endpoint for the *global* sub-resource used for the initial feed discovery, select the relevant tab for your scenario and follow the steps.
-| Resource type | Target sub-resource | Quantity |
-|--|--|
-| Microsoft.DesktopVirtualization/workspaces | global | One for all Azure Virtual Desktop deployments |
-| Microsoft.DesktopVirtualization/workspaces | feed | One per workspace |
-| Microsoft.DesktopVirtualization/hostpools | connection | One per host pool |
+> [!IMPORTANT]
+> - Only create one private endpoint for the *global* sub-resource for all your Azure Virtual Desktop deployments.
+>
+> - A private endpoint to the global sub-resource of any workspace controls the shared fully qualified domain name (FQDN) for initial feed discovery. This in turn enables feed discovery for all workspaces. Because the workspace connected to the private endpoint is so important, deleting it will cause all feed discovery processes to stop working. We recommend you create an unused placeholder workspace for the global sub-resource.
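+
+If you don't already have a workspace you want to dedicate to the global sub-resource, a minimal Azure CLI sketch for creating a placeholder workspace might look like the following; all names are placeholders:
+
+```azurecli
+# Create an empty placeholder workspace to terminate the global endpoint
+az desktopvirtualization workspace create \
+    --name <PlaceholderWorkspaceName> \
+    --resource-group <ResourceGroupName> \
+    --location <Location> \
+    --friendly-name "Placeholder for Private Link global endpoint"
+```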
-To configure Private Link in the Azure portal:
+# [Portal](#tab/portal)
-1. Open the Azure portal and sign in.
+1. From the Azure Virtual Desktop overview, select **Workspaces**, then select the name of a workspace you want to use for the global sub-resource.
-1. Search for and select **Azure Virtual Desktop**.
+ 1. *Optional*: Instead, create a placeholder workspace to terminate the global endpoint by following the instructions to [Create a workspace](create-application-group-workspace.md?tabs=portal#create-a-workspace).
-1. Go to **Host pools**, then select the name of the host pool you want to use.
+1. From the workspace overview, select **Networking**, then **Private endpoint connections**, and finally **New private endpoint**.
- >[!TIP]
- >You can also start setting up by going to **Private Link Center** > **Private Endpoints** > **Add a private endpoint**.
+1. On the **Basics** tab, complete the following information:
-1. After you've opened the host pool, go to **Networking** > **Private Endpoint connections**.
+ | Parameter | Value/Description |
+ |--|--|
+ | Subscription | Select the subscription you want to create the private endpoint in from the drop-down list. |
+ | Resource group | This automatically defaults to the same resource group as your workspace for the private endpoint, but you can also select an alternative existing one from the drop-down list, or create a new one. |
+ | Name | Enter a name for the new private endpoint. |
+ | Network interface name | The network interface name fills in automatically based on the name you gave the private endpoint, but you can also specify a different name. |
+ | Region | This automatically defaults to the same Azure region as the workspace and is where the private endpoint will be deployed. This must be the same region as your virtual network. |
-1. Select **New private endpoint**.
+ Once you've completed this tab, select **Next: Resource**.
-1. In the **Basics** tab, either use the drop-down menus to select the **Subscription** and **Resource group** you want to use or create a new resource group.
+1. On the **Resource** tab, validate the values for *Subscription*, *Resource type*, and *Resource*, then for **Target sub-resource**, select **global**. Once you've completed this tab, select **Next: Virtual Network**.
-1. Next, enter a name for your new private endpoint. The network interface name will fill automatically.
+1. On the **Virtual Network** tab, complete the following information:
-1. Select the **region** your private endpoint will be located in. You must choose the same location as your session host and the virtual network (VNet) you plan to use.
+ | Parameter | Value/Description |
+ |--|--|
+ | Virtual network | Select the virtual network you want to create the private endpoint in from the drop-down list. |
+ | Subnet | Select the subnet of the virtual network you want to create the private endpoint in from the drop-down list. |
+ | Network policy for private endpoints | Select **edit** if you want to choose a subnet network policy. For more information, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md). |
+ | Private IP configuration | Select **Dynamically allocate IP address** or **Statically allocate IP address**. The address space is from the subnet you selected.<br /><br />If you choose to statically allocate IP addresses, you need to fill in the **Name** and **Private IP** for each listed member. |
+ | Application security group | *Optional*: select an existing application security group for the private endpoint from the drop-down list, or create a new one. You can also add one later. |
-1. When you're done, select **Next: Resource >**.
+ Once you've completed this tab, select **Next: DNS**.
-1. In the **Resource** tab, use the following resource:
-
- - Resource type: **Microsoft.DesktopVirtualization/hostpools**
- - Resource: *your host pool*
- - Target sub-resource: connection
+1. On the **DNS** tab, choose whether you want to use [Azure Private DNS Zone](../dns/private-dns-privatednszone.md) by selecting **Yes** or **No** for **Integrate with private DNS zone**. If you select **Yes**, select the subscription and resource group in which to create the private DNS zone `privatelink-global.wvd.microsoft.com`. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
-1. Select **Next: Virtual Network >**.
+ Once you've completed this tab, select **Next: Tags**.
-1. In the **Virtual Network** tab, make sure the values in the **Virtual Network** and **subnet** fields are correct.
+1. *Optional*: On the **Tags** tab, you can enter any [name/value pairs](../azure-resource-manager/management/tag-resources.md) you need, then select **Next: Review + create**.
-1. In the **Private IP configuration** field, choose whether you want to dynamically or statically allocate IP addresses from the subnet you selected in the previous step.
-
- - If you choose to statically allocate IP addresses, you'll need to fill in the **Name** and **Private IP** for each listed member.
+1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment.
-1. Next, select an existing application security group or create a new one.
-
- - If you're creating a new application security group, select **Create new**, then enter a name for the new security group.
+1. Select **Create** to create the private endpoint for the global sub-resource.
-1. When you're finished, select **Next: DNS >**.
+# [Azure CLI](#tab/cli)
-1. In the **DNS** tab, in the **Integrate with private DNS zone** field, select **Yes** if you want to integrate with an Azure private DNS zone. The private DNS zone name is `privatelink.wvd.microsoft.com`. Learn more about integration at [Azure Private endpoint DNS configuration](../private-link/private-endpoint-dns.md).
+1. *Optional*: Create a placeholder workspace to terminate the global endpoint by following the instructions to [Create a workspace](create-application-group-workspace.md?tabs=cli#create-a-workspace).
-1. When you're done, select **Next: Tags >**.
+1. In the same CLI session, create a Private Link service connection and the private endpoint for the workspace with the global sub-resource by running the following commands:
-1. In the **Tags** tab, you can optionally add tags to help the Azure service categorize your resources. If you don't want to add tags, select **Next: Review + create**.
+ 1. To create a private endpoint with a dynamically allocated IP address:
+
+ ```azurecli
+ # Specify the Azure region. This must be the same region as your virtual network.
+ location=<Location>
+
+ # Get the resource ID of the workspace
+ workspaceId=$(az desktopvirtualization workspace show \
+ --name <WorkspaceName> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+
+ # Create a service connection and the private endpoint
+ az network private-endpoint create \
+ --name <PrivateEndpointName> \
+ --resource-group <ResourceGroupName> \
+ --location $location \
+ --vnet-name <VNetName> \
+ --subnet <SubnetName> \
+ --connection-name <ConnectionName> \
+ --private-connection-resource-id $workspaceId \
+ --group-id global \
+ --output table
+ ```
+
+ 1. To create a private endpoint with statically allocated IP addresses:
+
+ ```azurecli
+ # Specify the Azure region. This must be the same region as your virtual network.
+ location=<Location>
+
+ # Get the resource ID of the workspace
+ workspaceId=$(az desktopvirtualization workspace show \
+ --name <WorkspaceName> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+
+    # Store the private endpoint IP configuration in a variable
+ ip={name:ipconfig,group-id:global,member-name:web,private-ip-address:<IPAddress>}
+
+ # Create a service connection and the private endpoint
+ az network private-endpoint create \
+ --name <PrivateEndpointName> \
+ --resource-group <ResourceGroupName> \
+ --location $location \
+ --vnet-name <VNetName> \
+ --subnet <SubnetName> \
+ --connection-name <ConnectionName> \
+ --private-connection-resource-id $workspaceId \
+ --group-id global \
+    --ip-configs [$ip] \
+ --output table
+ ```
+
+ Your output should be similar to the following. Check that the value for **ProvisioningState** is **Succeeded**.
+
+ ```output
+    CustomNetworkInterfaceName    Location    Name             ProvisioningState    ResourceGroup
+    ----------------------------  ----------  ---------------  -------------------  ---------------
+                                  uksouth     endpoint-global  Succeeded            privatelink
+ ```
+
+1. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink-global.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure CLI, see [Configure the private DNS zone](../private-link/create-private-endpoint-cli.md#configure-the-private-dns-zone).
+
+# [Azure PowerShell](#tab/powershell)
+
+1. *Optional*: Create a placeholder workspace to terminate the global endpoint by following the instructions to [Create a workspace](create-application-group-workspace.md?tabs=powershell#create-a-workspace).
+
+1. In the same PowerShell session, create a Private Link service connection for the workspace with the global sub-resource by running the following commands. In these examples, the same virtual network and subnet are used.
+
+ ```azurepowershell
+ # Get the resource ID of the workspace
+ $workspaceId = (Get-AzWvdWorkspace -Name <WorkspaceName> -ResourceGroupName <ResourceGroupName>).Id
+
+ # Create the service connection
+ $parameters = @{
+ Name = '<ServiceConnectionName>'
+ PrivateLinkServiceId = $workspaceId
+ GroupId = 'global'
+ }
+
+ $serviceConnection = New-AzPrivateLinkServiceConnection @parameters
+ ```
+
+1. Finally, create the private endpoint by running the commands in one of the following examples.
+
+ 1. To create a private endpoint with a dynamically allocated IP address:
+
+ ```azurepowershell
+ # Specify the Azure region. This must be the same region as your virtual network.
+ $location = '<Location>'
+
+ # Create the private endpoint
+ $parameters = @{
+ Name = '<PrivateEndpointName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ Location = $location
+ Subnet = $subnet
+ PrivateLinkServiceConnection = $serviceConnection
+ }
+
+ New-AzPrivateEndpoint @parameters
+ ```
-1. Review the details of your private endpoint. If everything looks good, select **Create** and wait for the deployment to finish.
+ 1. To create a private endpoint with a statically allocated IP address:
+
+ ```azurepowershell
+ # Specify the Azure region. This must be the same region as your virtual network.
+ $location = '<Location>'
+
+    # Create a hash table for the private endpoint IP configuration
+    $ip = @{
+ Name = '<IPConfigName>'
+ GroupId = 'global'
+ MemberName = 'web'
+ PrivateIPAddress = '<IPAddress>'
+ }
+
+ $ipConfig = New-AzPrivateEndpointIpConfiguration @ip
+
+ # Create the private endpoint
+ $parameters = @{
+ Name = '<PrivateEndpointName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ Location = $location
+ Subnet = $subnet
+ PrivateLinkServiceConnection = $serviceConnection
+        IpConfiguration = $ipConfig
+ }
+
+ New-AzPrivateEndpoint @parameters
+ ```
-1. Now, repeat the process to create private endpoints for your resources. Return to step 3, but select **Workspaces** instead of host pools and use the following resources, then follow the rest of the steps until the end.
+ Your output should be similar to the following. Check that the value for **ProvisioningState** is **Succeeded**.
- - Resource type: **Microsoft.DesktopVirtualization/workspaces**
- - Resource: *your placeholder workspace*
- - Target sub-resource: global
+ ```output
+    ResourceGroupName Name            Location ProvisioningState Subnet
+    ----------------- --------------- -------- ----------------- ------
+    privatelink       endpoint-global uksouth  Succeeded
+ ```
- - Resource type: **Microsoft.DesktopVirtualization/workspaces**
- - Resource: *your workspace*
- - Target sub-resource: feed
+1. You need to [configure DNS for your private endpoint](../private-link/private-endpoint-dns.md) to resolve the DNS name of the private endpoint in the virtual network. The private DNS zone name is `privatelink-global.wvd.microsoft.com`. For the steps to create and configure the private DNS zone with Azure PowerShell, see [Configure the private DNS zone](../private-link/create-private-endpoint-powershell.md#configure-the-private-dns-zone).
->[!NOTE]
->You'll need to repeat this process to create a private endpoint for every resource you want to put into Private Link.
+## Closing public routes
-In addition to creating private routes, you can also control if the Azure Virtual Desktop resource allows traffic to come from public routes.
+Once you've created private endpoints, you can also control whether traffic is allowed from public routes. You can control this at a granular level using Azure Virtual Desktop, or more broadly using a [network security group](../virtual-network/network-security-groups-overview.md) (NSG) or [Azure Firewall](../firewall/protect-azure-virtual-desktop.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&bc=%2Fazure%2Fvirtual-desktop%2Fbreadcrumb%2Ftoc.json).
+
+### Control routes with Azure Virtual Desktop
+
+With Azure Virtual Desktop, you can independently control public traffic for workspaces and host pools. Select the relevant tab for your scenario and follow the steps. You can't configure this in Azure CLI. You need to repeat these steps for each workspace and host pool you use with Private Link.
-To control public traffic:
+# [Portal](#tab/portal-2)
-1. Open the Azure portal and sign in.
+#### Workspaces
-1. Search for and select **Azure Virtual Desktop**.
+1. From the Azure Virtual Desktop overview, select **Workspaces**, then select the name of the workspace to control public traffic.
-1. Go to **Host pools** > **Networking** > **Firewall and virtual networks**.
+1. From the workspace overview, select **Networking**, then select the **Public access** tab.
-1. First, configure the **Allow end users access from public network** setting.
+1. Select one of the following options:
- - If you select the check box, users can connect to the host pool using public internet or private endpoints.
+ | Setting | Description |
+ |--|--|
+ | **Enable public access from all networks** | End users can access the feed over the public internet or the private endpoints. |
+ | **Disable public access and use private access** | End users can only access the feed over the private endpoints. |
- - If you don't select the check box, users can only connect to host pool using private endpoints.
+1. Select **Save**.
-1. Next, configure the **Allow session hosts access from public network** setting.
+#### Host pools
- - If you select the check box, Azure Virtual Desktop session hosts will talk to the Azure Virtual Desktop service over public internet or private endpoints.
+1. From the Azure Virtual Desktop overview, select **Host pools**, then select the name of the host pool to control public traffic.
- - If you don't select the check box, Azure Virtual Desktop session hosts can only talk to the Azure Virtual Desktop service over private endpoint connections.
+1. From the host pool overview, select **Networking**, then select the **Public access** tab.
->[!IMPORTANT]
->Disabling the **Allow session host access from public network** setting won't affect existing sessions. You must restart the session host VM for the change to take effect on the session host network settings.
+1. Select one of the following options:
+
+ | Setting | Description |
+ |--|--|
+ | **Enable public access from all networks** | End users can access the feed and session hosts securely over the public internet or the private endpoints. |
+ | **Enable public access for end users, use private access for session hosts** | End users can access the feed securely over the public internet but must use private endpoints to access session hosts. |
+ | **Disable public access and use private access** | End users can only access the feed and session hosts over the private endpoints. |
+
+1. Select **Save**.
+
+# [Azure PowerShell](#tab/powershell-2)
+
+> [!IMPORTANT]
+> You need to use the preview version of the Az.DesktopVirtualization module to run the following commands. For more information and to download and install the preview module, see [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview).
-## Network security groups
+#### Workspaces
-Follow the directions in [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md) to set up a network security group (NSG). You can use this NSG to block the **WindowsVirtualDesktop** service tag. If you block this service tag, all service traffic will use private routes only.
+1. In the same PowerShell session, you can disable public access and use private access by running the following command:
-When you set up your NSG, you must configure it to allow both the URLs in the [required URL list](safe-url-list.md) and your private endpoints. Make sure to include the URLs for Azure Monitor.
+ ```azurepowershell
+ $parameters = @{
+ Name = '<WorkspaceName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ PublicNetworkAccess = 'Disabled'
+ }
+
+ Update-AzWvdWorkspace @parameters
+ ```
+
+1. To enable public access from all networks, run the following command:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<WorkspaceName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ PublicNetworkAccess = 'Enabled'
+ }
+
+ Update-AzWvdWorkspace @parameters
+ ```
+
+#### Host pools
+
+1. In the same PowerShell session, you can disable public access and use private access by running the following command:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<HostPoolName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ PublicNetworkAccess = 'Disabled'
+ }
+
+ Update-AzWvdHostPool @parameters
+ ```
+
+1. To enable public access from all networks, run the following command:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<HostPoolName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ PublicNetworkAccess = 'Enabled'
+ }
+
+ Update-AzWvdHostPool @parameters
+ ```
+
+1. To use public access for session hosts, but private access for end users, run the following command:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<HostPoolName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ PublicNetworkAccess = 'EnabledForSessionHostsOnly'
+ }
+
+ Update-AzWvdHostPool @parameters
+ ```
+
+1. To use public access for end users, but private access for session hosts, run the following command:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<HostPoolName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ PublicNetworkAccess = 'EnabledForClientsOnly'
+ }
+
+ Update-AzWvdHostPool @parameters
+ ```
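+
+To confirm a change took effect, you can read the setting back. A quick check, assuming the preview module surfaces the `PublicNetworkAccess` property on the returned objects:
+
+```azurepowershell
+# Check the current public network access setting
+(Get-AzWvdHostPool -Name <HostPoolName> -ResourceGroupName <ResourceGroupName>).PublicNetworkAccess
+(Get-AzWvdWorkspace -Name <WorkspaceName> -ResourceGroupName <ResourceGroupName>).PublicNetworkAccess
+```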
+
+---
+
+> [!IMPORTANT]
+> Changing access for session hosts won't affect existing sessions. You must restart the session host virtual machines for the change to take effect.
-> [!NOTE]
-> If you intend to restrict network ports from either the user client devices or your session host VMs to the private endpoints, you will need to allow traffic across the entire TCP dynamic port range of 1 - 65535 to the private endpoint for the host pool resource using the *connection* sub-resource. The entire TCP dynamic port range is needed because port mapping is used to all global gateways through the single private endpoint IP address corresponding to the *connection* sub-resource.
+### Block public routes with network security groups or Azure Firewall
+
+If you're using [network security groups](../virtual-network/network-security-groups-overview.md) or [Azure Firewall](../firewall/overview.md) to control connections from user client devices or your session hosts to the private endpoints, you can use the **WindowsVirtualDesktop** service tag to block traffic from the public internet. If you block public internet traffic using this service tag, all service traffic uses private routes only.
+
+> [!CAUTION]
+> - Make sure you don't block traffic between your private endpoints and the addresses in the [required URL list](safe-url-list.md).
>
-> If you restrict ports to the private endpoint, your users may not be able to connect successfully to Azure Virtual Desktop.
+> - Don't restrict ports from either the user client devices or your session hosts to the private endpoint for a host pool's *connection* sub-resource. The entire TCP dynamic port range of *1 - 65535* to the private endpoint is needed because port mapping is used to reach all global gateways through the single private endpoint IP address corresponding to the *connection* sub-resource. If you restrict ports to the private endpoint, your users might not be able to connect successfully to Azure Virtual Desktop.
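+
+As an illustration of this approach, the following sketch denies outbound traffic to the service tag with an NSG rule; the rule name and priority are arbitrary placeholders:
+
+```azurecli
+# Deny outbound traffic to the WindowsVirtualDesktop service tag so that
+# Azure Virtual Desktop service traffic can only use the private endpoints
+az network nsg rule create \
+    --resource-group <ResourceGroupName> \
+    --nsg-name <NsgName> \
+    --name DenyAvdPublicOutbound \
+    --priority 200 \
+    --direction Outbound \
+    --access Deny \
+    --protocol '*' \
+    --source-address-prefixes '*' \
+    --destination-address-prefixes WindowsVirtualDesktop \
+    --destination-port-ranges '*'
+```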
+
+## Validate Private Link with Azure Virtual Desktop
+
+Once you've closed public routes, you should validate that Private Link with Azure Virtual Desktop is working. You can do this by checking the connection state of each private endpoint, checking the status of your session hosts, and testing that your users can refresh and connect to their remote resources.
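+
+A useful first check is that the service's fully qualified domain names now resolve to private IP addresses from inside the virtual network. For example, from a session host you might run the following; the FQDN shown is illustrative, and the address returned should come from the subnet of your private endpoint:
+
+```azurepowershell
+# Run from a session host or client device inside the virtual network
+Resolve-DnsName -Name 'rdweb.wvd.microsoft.com' | Select-Object Name, IPAddress
+```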
+
+### Check the connection state of each private endpoint
+
+To check the connection state of each private endpoint, select the relevant tab for your scenario and follow the steps. You should repeat these steps for each workspace and host pool you use with Private Link.
+
+# [Portal](#tab/portal)
+
+#### Workspaces
+
+1. From the Azure Virtual Desktop overview, select **Workspaces**, then select the name of the workspace for which you want to check the connection state.
+
+1. From the workspace overview, select **Networking**, then **Private endpoint connections**.
+
+1. For the private endpoint listed, check the **Connection state** is **Approved**.
+
+#### Host pools
+
+1. From the Azure Virtual Desktop overview, select **Host pools**, then select the name of the host pool for which you want to check the connection state.
+
+1. From the host pool overview, select **Networking**, then **Private endpoint connections**.
+
+1. For the private endpoint listed, check the **Connection state** is **Approved**.
+
+# [Azure CLI](#tab/cli)
+
+1. In the same CLI session, run the following commands to check the connection state of a workspace or a host pool:
+
+ ```azurecli
+ az network private-endpoint show \
+ --name <PrivateEndpointName> \
+ --resource-group <ResourceGroupName> \
+ --query "{name:name, privateLinkServiceConnectionStates:privateLinkServiceConnections[].privateLinkServiceConnectionState}"
+ ```
+
+ Your output should be similar to the following. Check that the value for **status** is **Approved**.
+
+ ```output
+ {
+ "name": "endpoint-ws01",
+ "privateLinkServiceConnectionStates": [
+ {
+ "actionsRequired": "None",
+ "description": "Auto-approved",
+ "status": "Approved"
+ }
+ ]
+ }
+ ```
+
+# [Azure PowerShell](#tab/powershell)
-## Validate your Private Link deployment
+> [!IMPORTANT]
+> You need to use the preview version of the Az.DesktopVirtualization module to run the following commands. For more information and to download and install the preview module, see [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview).
+
+#### Workspaces
+
+1. In the same PowerShell session, run the following commands to check the connection state of a workspace:
+
+ ```azurepowershell
+    (Get-AzWvdWorkspace -Name <WorkspaceName> -ResourceGroupName <ResourceGroupName>).PrivateEndpointConnection | Format-List Name, PrivateLinkServiceConnectionStateStatus, PrivateLinkServiceConnectionStateDescription, PrivateLinkServiceConnectionStateActionsRequired
+ ```
+
+ Your output should be similar to the following. Check that the value for **PrivateLinkServiceConnectionStateStatus** is **Approved**.
+
+ ```output
+ Name : endpoint-ws01
+ PrivateLinkServiceConnectionStateStatus : Approved
+ PrivateLinkServiceConnectionStateDescription : Auto-approved
+ PrivateLinkServiceConnectionStateActionsRequired : None
+ ```
-To validate your Private Link for Azure Virtual Desktop and make sure it's working:
+#### Host pools
-1. Check to see if your session hosts are registered and functional on the VNet. You can check their health status with [Azure Monitor](insights.md).
+1. In the same PowerShell session, run the following commands to check the connection state of a host pool:
+
+ ```azurepowershell
+    (Get-AzWvdHostPool -Name <HostPoolName> -ResourceGroupName <ResourceGroupName>).PrivateEndpointConnection | Format-List Name, PrivateLinkServiceConnectionStateStatus, PrivateLinkServiceConnectionStateDescription, PrivateLinkServiceConnectionStateActionsRequired
+ ```
+
+ Your output should be similar to the following. Check that the value for **PrivateLinkServiceConnectionStateStatus** is **Approved**.
+
+ ```output
+ Name : endpoint-hp01
+ PrivateLinkServiceConnectionStateStatus : Approved
+ PrivateLinkServiceConnectionStateDescription : Auto-approved
+ PrivateLinkServiceConnectionStateActionsRequired : None
+    ```
+
-1. Next, test your feed connections to make sure they perform as expected. Use the client and make sure you can add and refresh workspaces.
+### Check the status of your session hosts
-1. Finally, run the following end-to-end tests:
+1. Check the status of your session hosts in Azure Virtual Desktop.
- - Make sure your clients can't connect to Azure Virtual Desktop and your session hosts from public routes.
- - Make sure the session hosts can't connect to Azure Virtual Desktop from public routes.
+ 1. From the Azure Virtual Desktop overview, select **Host pools**, then select the name of the host pool.
+
+ 1. In the **Manage** section, select **Session hosts**.
+
+ 1. Review the list of session hosts and check their status is **Available**.
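+
+   If you prefer PowerShell, a quick equivalent check using the same module as earlier in this article might look like this:
+
+   ```azurepowershell
+   # List the session hosts in a host pool and their current status
+   Get-AzWvdSessionHost -HostPoolName <HostPoolName> -ResourceGroupName <ResourceGroupName> | Format-List Name, Status
+   ```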
+
+### Check your users can connect
+
+To test that your users can connect to their remote resources:
+
+1. Use the Remote Desktop client and make sure you can [subscribe to and refresh workspaces](users/remote-desktop-clients-overview.md).
+
+1. Finally, make sure your users can connect to a remote session.
## Next steps

- Learn more about how Private Link for Azure Virtual Desktop works at [Use Private Link with Azure Virtual Desktop](private-link-overview.md).
+
- Learn how to configure Azure Private Endpoint DNS at [Private Link DNS integration](../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder).
-- For general troubleshooting guides for Private Link, see [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md)
-- Understand how connectivity for the Azure Virtual Desktop service works at [Azure Virtual Desktop network connectivity](network-connectivity.md)
-- See the [Required URL list](safe-url-list.md) for the list of URLs you'll need to unblock to ensure network access to the Azure Virtual Desktop service.
+
+- For general troubleshooting guides for Private Link, see [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md).
+
+- Understand how connectivity for the Azure Virtual Desktop service works at [Azure Virtual Desktop network connectivity](network-connectivity.md).
+
+- See the [Required URL list](safe-url-list.md) for the list of URLs you need to unblock to ensure network access to the Azure Virtual Desktop service.
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
If you run into any issues with Start VM On Connect, we recommend you use the Az
If the session host VM doesn't turn on, your first step is to check the health of the VM you tried to turn on.

> [!NOTE]
-> Connecting to a session host outside of Azure Virtual Desktop that is powered off, such as using the MSTSC client, won't start the VM.
+> Connecting to a session host outside of Azure Virtual Desktop that is powered off, such as by using the MSTSC client, won't start the VM.
For other questions, check out the [Start VM on Connect FAQ](start-virtual-machine-connect-faq.md).
virtual-desktop Tutorial Create Connect Personal Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tutorial-create-connect-personal-desktop.md
To create a personal host pool, workspace, application group, and session host V
| Host pool type | Select **Personal**. This means that end users have a dedicated assigned session host that they'll always connect to. Selecting **Personal** shows a new option for **Assignment type**. |
| Assignment type | Select **Automatic**. Automatic assignment means that a user will automatically get assigned the first available session host when they first sign in, which will then be dedicated to that user. |
- Once you've completed this tab, select **Next: Virtual Machines**.
+ Once you've completed this tab, select **Next: Networking**.
+
+1. On the **Networking** tab, select **Enable public access from all networks**, which lets end users access the feed and session hosts securely over the public internet or the private endpoints. Once you've completed this tab, select **Next: Virtual Machines**.
1. On the **Virtual machines** tab, complete the following information:
virtual-desktop Client Features Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-web.md
There are several keyboard shortcuts you can use to help use some of the feature
| Windows shortcut | Azure Virtual Desktop shortcut | Description |
|--|--|--|
-| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>DELETE</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>END</kbd> (Windows) | Shows the Windows Security dialog box. |
-| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>DELETE</kbd> | <kbd>FN</kbd>+<kbd>Control</kbd>+<kbd>Option</kbd>+<kbd>Delete</kbd> (macOS) | Shows the Windows Security dialog box. |
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>DELETE</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>END</kbd> (Windows)<br /><br /><kbd>FN</kbd>+<kbd>Control</kbd>+<kbd>Option</kbd>+<kbd>Delete</kbd> (macOS) | Shows the Windows Security dialog box. |
| <kbd>Windows</kbd> | <kbd>ALT</kbd>+<kbd>F3</kbd> | Sends the *Windows* key to the remote session. |
| <kbd>ALT</kbd>+<kbd>TAB</kbd> | <kbd>ALT</kbd>+<kbd>PAGE UP</kbd> | Switches between programs from left to right. |
| <kbd>ALT</kbd>+<kbd>SHIFT</kbd>+<kbd>TAB</kbd> | <kbd>ALT</kbd>+<kbd>PAGE DOWN</kbd> | Switches between programs from right to left. |

> [!NOTE]
-> You can copy and paste text only. Files can't be copied or pasted to and from the web client. Additionally, you can only use <kbd>CTRL</kbd>+<kbd>C</kbd> and <kbd>CTRL</kbd>+<kbd>V</kbd> to copy and paste text.
+> - You can copy and paste text only. Files can't be copied or pasted to and from the web client. Additionally, you can only use <kbd>CTRL</kbd>+<kbd>C</kbd> and <kbd>CTRL</kbd>+<kbd>V</kbd> to copy and paste text.
+>
+> - When you're connected to a desktop or app, you can access the resources toolbar at the top of the window by using <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>HOME</kbd> on Windows, or <kbd>FN</kbd>+<kbd>Control</kbd>+<kbd>Option</kbd>+<kbd>Home</kbd> on macOS.
#### Input Method Editor
virtual-machines Dcasccv5 Dcadsccv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcasccv5-dcadsccv5-series.md
The DCas_cc_v5-series sizes offer a combination of vCPU and memory for most prod
[Live Migration](maintenance-and-updates.md): Not Supported <br>
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported <br>
[VM Generation Support](generation-2.md): Generation 2 <br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported only for Marketplace Windows image <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>
The DCads_cc_v5-series sizes offer a combination of vCPU, memory and temporary s
[Premium Storage](premium-storage-performance.md): Supported <br>
[Premium Storage caching](premium-storage-performance.md): Supported <br>
-[Live Migration](maintenance-and-updates.md): Supported <br>
-[Memory Preserving Updates](maintenance-and-updates.md): Supported <br>
+[Live Migration](maintenance-and-updates.md): Not Supported <br>
+[Memory Preserving Updates](maintenance-and-updates.md): Not Supported <br>
[VM Generation Support](generation-2.md): Generation 2 <br>
[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br>
virtual-machines Ecasccv5 Ecadsccv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ecasccv5-ecadsccv5-series.md
The ECas_cc_v5-series sizes offer a combination of vCPU and memory for most prod
[Live Migration](maintenance-and-updates.md): Not Supported <br>
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported <br>
[VM Generation Support](generation-2.md): Generation 2 <br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported only for Marketplace Windows image <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
<br>
The ECads_cc_v5-series sizes offer a combination of vCPU, memory and temporary s
[Premium Storage](premium-storage-performance.md): Supported <br>
[Premium Storage caching](premium-storage-performance.md): Supported <br>
-[Live Migration](maintenance-and-updates.md): Supported <br>
-[Memory Preserving Updates](maintenance-and-updates.md): Supported <br>
-[VM Generation Support](generation-2.md): Generation 1 and 2 <br>
+[Live Migration](maintenance-and-updates.md): Not Supported <br>
+[Memory Preserving Updates](maintenance-and-updates.md): Not Supported <br>
+[VM Generation Support](generation-2.md): Generation 2 <br>
[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
virtual-machines Hbv2 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series-overview.md
Previously updated : 03/04/2023 Last updated : 07/13/2023
Process pinning works on HBv2-series VMs because we expose the underlying silico
| CPU | AMD EPYC 7742 | | CPU Frequency (non-AVX) | ~3.1 GHz (single + all cores) | | Memory | 4 GB/core (480 GB total) |
-| Local Disk | 960 GB NVMe (block), 480 GB SSD (page file) |
+| Local Disk | 960 GiB NVMe (block), 480 GB SSD (page file) |
| Infiniband | 200 Gb/s EDR Mellanox ConnectX-6 | | Network | 50 Gb/s Ethernet (40 Gb/s usable) Azure second Gen SmartNIC |
virtual-machines Convert Unmanaged To Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/convert-unmanaged-to-managed-disks.md
If you have existing Windows virtual machines (VMs) that use unmanaged disks, yo
* Review [the FAQ about migration to Managed Disks](../faq-for-disks.yml).
+* Ensure the VM is in a healthy state before converting.
++ [!INCLUDE [virtual-machines-common-convert-disks-considerations](../../../includes/virtual-machines-common-convert-disks-considerations.md)] * The original VHDs and the storage account used by the VM before migration are not deleted. They continue to incur charges. To avoid being billed for these artifacts, delete the original VHD blobs after you verify that the migration is complete. If you need to find these unattached disks in order to delete them, see our article [Find and delete unattached Azure managed and unmanaged disks](find-unattached-disks.md).
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
The following metric is available for virtual hub router within a virtual hub:
| Metric | Description| | | |
-| **Virtual Hub Data Processed** | Data in bytes/second on how much traffic traverses the virtual hub router in a given time period. Note that only the following flows use the virtual hub router: VNet to VNet (same hub) and VPN/ExpressRoute branch to VNet (interhub).|
+| **Virtual Hub Data Processed** | Data on how much traffic traverses the virtual hub router in a given time period. Note that only the following flows use the virtual hub router: VNet to VNet (same hub) and VPN/ExpressRoute branch to VNet (interhub).|
#### PowerShell steps
To query, use the following example PowerShell commands. The necessary fields ar
**Step 1:** ```azurepowershell-interactive
-$MetricInformation = Get-AzMetric -ResourceId "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/VirtualHubs/<VirtualHubName>" -MetricName "VirtualHubDataProcessed" -TimeGrain 00:05:00 -StartTime 2022-2-20T01:00:00Z -EndTime 2022-2-20T01:30:00Z -AggregationType Average
+$MetricInformation = Get-AzMetric -ResourceId "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/VirtualHubs/<VirtualHubName>" -MetricName "VirtualHubDataProcessed" -TimeGrain 00:05:00 -StartTime 2022-2-20T01:00:00Z -EndTime 2022-2-20T01:30:00Z -AggregationType Sum
``` **Step 2:**
$MetricInformation.Data
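For example, a minimal sketch of reading the data points returned by the previous commands (in the Az.Monitor object model, values from the Sum aggregation surface on each data point's `Total` property; verify this against your module version):

```azurepowershell-interactive
# Each entry in Data represents one time grain (here, 5 minutes).
# With -AggregationType Sum, the total bytes for the grain land in the Total property.
foreach ($point in $MetricInformation.Data) {
    Write-Output ("{0}: {1} bytes processed" -f $point.TimeStamp, $point.Total)
}
```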
* **Start Time and End Time** - This time is based on UTC, so please ensure that you're entering UTC values when inputting these parameters. If these parameters aren't used, the past one hour's worth of data is shown by default.
-* **Aggregation Types** - Average/Minimum/Maximum/Total
- * Average - Total average of bytes/sec per the selected time period.
- * Minimum – Minimum bytes that were sent during the selected time grain period.
- * Maximum – Maximum bytes that were sent during the selected time grain period.
- * Total – Total bytes/sec that were sent during the selected time grain period.
+* **Sum Aggregation Type** - This aggregation type shows the total number of bytes that traversed the virtual hub router during the selected time period. The **Max** and **Min** aggregation types aren't meaningful for this metric.
+
### <a name="s2s-metrics"></a>Site-to-site VPN gateway metrics
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
If the update fails for any reason, your hub will be auto recovered to the old v
>[!NOTE] > The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource and subscription, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version.
+> If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you will need to update your virtual network connection after the virtual hub upgrade (Ex: you can configure the virtual network connection to propagate to a dummy label).
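For example, a minimal Azure PowerShell sketch of refreshing a connection's propagated labels after a hub upgrade (all resource names are placeholders, and the cmdlet parameters should be verified against your Az.Network version):

```azurepowershell-interactive
# Placeholder names; replace with your own resource group, hub, and connection.
$conn = Get-AzVirtualHubVnetConnection -ResourceGroupName "myRG" `
    -ParentResourceName "myVirtualHub" -Name "mySpokeConnection"

# Rebuild the routing configuration, propagating to a placeholder ("dummy") label.
$routing = New-AzRoutingConfiguration `
    -AssociatedRouteTable $conn.RoutingConfiguration.AssociatedRouteTable.Id `
    -Id @($conn.RoutingConfiguration.PropagatedRouteTables.Ids.Id) `
    -Label @("dummy")

# Push the updated routing configuration to the virtual network connection.
Update-AzVirtualHubVnetConnection -ResourceGroupName "myRG" `
    -ParentResourceName "myVirtualHub" -Name "mySpokeConnection" `
    -RoutingConfiguration $routing
```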
### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway?
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Previously updated : 06/28/2023 Last updated : 07/13/2023
Last updated 06/28/2023
This tutorial helps you create and manage a virtual network gateway (VPN gateway) using the Azure portal. The VPN gateway is just one part of a connection architecture to help you securely access resources within a VNet.
-The following diagram shows the virtual network and the VPN gateway that you create using the steps in this article. You can later create different types of connections, such as [Site-to-Site](tutorial-site-to-site-portal.md) and [Point-to-site](point-to-site-about.md) to connect to this virtual network via the VPN gateway.
+* The left side of the diagram shows the virtual network and the VPN gateway that you create using the steps in this article.
+* You can later add different types of connections, as shown on the right side of the diagram. For example, you can create [Site-to-Site](tutorial-site-to-site-portal.md) and [Point-to-site](point-to-site-about.md) connections. See [VPN Gateway design](design.md) to view different design architectures that you can build.
If you want to learn more about the configuration settings used in this tutorial, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md). For more information about VPN Gateway, see [What is VPN Gateway?](vpn-gateway-about-vpngateways.md)
For this exercise, we won't be selecting a zone redundant SKU. If you want to le
[!INCLUDE [Create a vpn gateway](../../includes/vpn-gateway-add-gw-portal-include.md)] [!INCLUDE [Configure PIP settings](../../includes/vpn-gateway-add-gw-pip-portal-include.md)]
-A gateway can take 45 minutes or more to fully create and deploy. You can see the deployment status on the Overview page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
+A gateway can take 45 minutes or more to fully create and deploy. You can see the deployment status on the **Overview** page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
[!INCLUDE [NSG warning](../../includes/vpn-gateway-no-nsg-include.md)]
these resources using the following steps:
## Next steps
-Once you have a VPN gateway, you can configure connections. The following articles will help you create a few of the most common configurations:
+Once you've created a VPN gateway, you can configure additional gateway settings and connections. The following articles help you create a few of the most common configurations:
> [!div class="nextstepaction"] > [Site-to-Site VPN connections](./tutorial-site-to-site-portal.md)
web-application-firewall Protect Api Hosted Apim By Waf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/protect-api-hosted-apim-by-waf.md
+
+ Title: Protect APIs hosted in APIM using Azure Web Application Firewall with Azure Front Door
+description: This article guides you through creating an API in APIM and protecting it from web application attacks by using Azure Web Application Firewall integrated with Azure Front Door.
+++++ Last updated : 07/13/2023++
+# Protect APIs hosted on API Management using Azure Web Application Firewall
+
+A growing number of enterprises adhere to an API-first approach for their internal applications, and the number and complexity of security attacks against web applications are constantly evolving. This situation requires enterprises to adopt a strong security strategy to protect APIs from various web application attacks.
+
+[Azure Web Application Firewall (WAF)](../overview.md) is an Azure Networking product that protects APIs from many of the [OWASP top 10](https://owasp.org/www-project-top-ten/) web attacks, CVEs, and malicious bot attacks.
+
+This article describes how to use [Azure Web Application Firewall on Azure Front Door](afds-overview.md) to protect APIs hosted on [Azure API Management](../../api-management/api-management-key-concepts.md).
+
+## Create an APIM instance and publish an API in APIM that generates a mock API response
+
+1. Create an APIM instance
+
+ [Quickstart: Create a new Azure API Management service instance by using the Azure portal](../../api-management/get-started-create-service-instance.md)
+
   The following screenshot shows that an APIM instance called **contoso-afd-apim-resource** has been created. It can take 30 to 40 minutes to create and activate an API Management service.
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/contoso-main-page.png" alt-text="A screenshot showing the APIM instance created." lightbox="../media/protect-api-hosted-in-apim-by-waf/contoso-main-page.png":::
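   Alternatively, a minimal Azure PowerShell sketch of creating the instance (the resource group name, location, organization, and email are example values):

   ```azurepowershell-interactive
   # Example values; replace with your own. Activation can take 30 to 40 minutes.
   New-AzApiManagement -ResourceGroupName "myResourceGroup" `
       -Name "contoso-afd-apim-resource" `
       -Location "East US" `
       -Organization "Contoso" `
       -AdminEmail "admin@contoso.com" `
       -Sku "Developer"
   ```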
++
+2. Create an API and generate mock API responses
+
+ [Tutorial: Mock API responses](../../api-management/mock-api-responses.md#add-an-operation-to-the-test-api)
+
   Replace the API name **Test API** used in that tutorial with **Book API**.
+
   The Book API performs a GET operation on the `/test` URL path. The mock response is set to **200 OK** with a content type of `application/json` and a body of `{"Book": "$100"}`.
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/apim-get-test.png" alt-text="A screenshot showing the GET operation defined in APIM." lightbox="../media/protect-api-hosted-in-apim-by-waf/apim-get-test.png":::
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/apim-200-ok.png" alt-text="A screenshot showing the mock response created." lightbox="../media/protect-api-hosted-in-apim-by-waf/apim-200-ok.png":::
++
+3. Deselect the **Subscription required** check box under the API **Settings** tab and select **Save**.
+
+4. Test the mock responses from the APIM interface. You should receive a **200 OK** response.
+
+Now, the Book API has been created. A successful call to this URL returns a **200 OK** response and returns the price of a book as $100.
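For example, you could verify the mock response from the command line; a sketch, assuming the API's URL suffix is `book` (the suffix depends on what you chose when creating the Book API):

```azurepowershell-interactive
# Hypothetical URL; replace "book" with your API's URL suffix.
# Returns the mocked JSON body with a 200 OK status.
Invoke-RestMethod -Uri "https://contoso-afd-apim-resource.azure-api.net/book/test"
```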
++
+## Create an Azure Front Door Premium instance with APIM hosted API as the origin
+
+The Microsoft-managed Default Rule Set is based on the [OWASP Core Rule Set](https://github.com/coreruleset/coreruleset//) and includes Microsoft Threat Intelligence rules.
+
+> [!NOTE]
+> Managed Rule Set is not available for Azure Front Door Standard SKU. For more information about the different SKUs, see [Azure Front Door tier comparison](../../frontdoor/standard-premium/tier-comparison.md#feature-comparison-between-tiers).
++
+Use the steps described in the quick create option to create an Azure Front Door Premium profile with an associated WAF security policy in the same resource group:
+
+[Quickstart: Create an Azure Front Door profile - Azure portal](../../frontdoor/create-front-door-portal.md)
+
+Use the following settings when creating the Azure Front Door profile:
+- Name: myAzureFrontDoor
+- Endpoint Name: bookfrontdoor
+- Origin type: API Management
+- Origin host name: contoso-afd-apim-resource.azure-api.net (contoso-afd-apim-resource)
+- WAF policy: Create a new WAF policy with name **bookwafpolicy**.
+
+All other settings remain at default values.
+
+## Enable Azure Web Application Firewall in prevention mode
+
+Select the **bookwafpolicy** Azure WAF policy and ensure the policy mode is set to [**Prevention** on the Overview tab of the policy](waf-front-door-create-portal.md#change-mode).
+
+Azure WAF detection mode is used for testing and validating the policy. Detection doesn't block the call but logs all threats detected, while prevention mode blocks the call if an attack is detected. Typically, you test the scenario before switching to prevention mode. For this exercise, we switch to prevention mode.
+[Azure Web Application Firewall on Azure Front Door](afds-overview.md#waf-modes) has more information about various WAF policy modes.
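If you prefer scripting, a minimal sketch of switching the policy to prevention mode with Azure PowerShell (the resource group name is a placeholder; the policy name is the one used in this article):

```azurepowershell-interactive
# Switch the WAF policy from detection to prevention mode.
Update-AzFrontDoorWafPolicy -ResourceGroupName "myResourceGroup" `
    -Name "bookwafpolicy" `
    -Mode "Prevention"
```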
++
+## Restrict APIM access through the Azure Front Door only
+
+Requests routed through the Front Door include headers specific to your Front Door configuration. You can configure an [access restriction policy](../../api-management/api-management-policies.md#access-restriction-policies) as an inbound APIM policy to filter incoming requests based on the unique value of the `X-Azure-FDID` HTTP request header that is sent to API Management. This header value is the Azure Front Door ID, which is available on the AFD **Overview** page.
+
+
+1. Copy the Front Door ID from the AFD overview page.
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/afd-endpoint-fd-id.png" alt-text="A screenshot showing the AFD ID." lightbox="../media/protect-api-hosted-in-apim-by-waf/afd-endpoint-fd-id.png":::
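   Alternatively, a sketch of reading the ID with Azure PowerShell (the resource group name is a placeholder, and the `FrontDoorId` property name is an assumption based on the profile's ARM representation; verify it against your Az.Cdn version):

   ```azurepowershell-interactive
   # Placeholder resource group; the profile name is the one used in this article.
   $afdProfile = Get-AzFrontDoorCdnProfile -ResourceGroupName "myResourceGroup" -Name "myAzureFrontDoor"
   $afdProfile.FrontDoorId
   ```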
++
+2. Access the APIM APIs page, select the Book API, and select **Design** and **All operations**. In the inbound policy section, select **+ Add policy**.
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/apim-inbound-policy.png" alt-text="A screenshot showing how to add an inbound policy." lightbox="../media/protect-api-hosted-in-apim-by-waf/apim-inbound-policy.png":::
+
+3. Select **Other policies**.
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/apim-other-policies.png" alt-text="A screenshot showing other policies selected." lightbox="../media/protect-api-hosted-in-apim-by-waf/apim-other-policies.png":::
+
+4. Select **Show snippets** and select **Check HTTP header**.
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/apim-check-http-header.png" alt-text="A screenshot showing check header selected." lightbox="../media/protect-api-hosted-in-apim-by-waf/apim-check-http-header.png":::
+
+ Add the following code to the inbound policy for HTTP header `X-Azure-FDID`. Replace the `{FrontDoorId}` with the AFD ID copied in the first step of this section.
++
+ ```
+ <check-header name="X-Azure-FDID" failed-check-httpcode="403" failed-check-error-message="Invalid request" ignore-case="false">
+ <value>{FrontDoorId}</value>
+ </check-header>
+
+ ```
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/apim-final-check-header.png" alt-text="A screenshot showing the final policy configuration." lightbox="../media/protect-api-hosted-in-apim-by-waf/apim-final-check-header.png":::
+
+ Select **Save**.
+
+ At this point, APIM access is restricted to the Azure Front Door endpoint only.
+
+## Verify the API call is routed through Azure Front Door and protected by Azure Web Application Firewall
+
+1. Obtain the newly created Azure Front Door endpoint from the **Front Door Manager**.
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/afd-get-endpoint.png" alt-text="A screenshot showing the AFD endpoint selected." lightbox="../media/protect-api-hosted-in-apim-by-waf/afd-get-endpoint.png":::
+
+2. Look at the origin groups and confirm that the origin host name is __contoso-afd-apim-resource.azure-api.net__. This step verifies that the APIM instance is an origin in the newly configured Azure Front Door Premium profile.
+
+3. Under the **Security Policies** section, verify that the WAF policy **bookwafpolicy** is provisioned.
+
+4. Select **bookwafpolicy** and verify that it has managed rules provisioned. The latest versions of Microsoft_DefaultRuleSet and Microsoft_BotManagerRuleSet are provisioned, which protect the origin against OWASP top 10 vulnerabilities and malicious bot attacks.
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/book-waf-policy.png" alt-text="A screenshot showing the WAF policy for managed rules." lightbox="../media/protect-api-hosted-in-apim-by-waf/book-waf-policy.png":::
+
+At this point, the end-to-end call is set up, and the API is protected by Azure Web Application Firewall.
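You can also spot-check the provisioned WAF policy from the command line; a sketch, assuming the placeholder resource group and the policy name used earlier:

```azurepowershell-interactive
# Inspect the WAF policy's mode and its managed rule set assignments.
$policy = Get-AzFrontDoorWafPolicy -ResourceGroupName "myResourceGroup" -Name "bookwafpolicy"
$policy.PolicyMode
$policy.ManagedRules
```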
+
+## Verify the setup
+
+1. Access the API through the Azure Front Door endpoint from your browser. The API should return the following response:
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/test-book-front-door.png" alt-text="A screenshot showing API access through AFD endpoint.":::
+
+2. Verify that APIM isn't accessible directly over the Internet and is accessible only via AFD:
+
+ :::image type="content" source="../media/protect-api-hosted-in-apim-by-waf/block-direct-access.png" alt-text="A screenshot showing APIM inaccessible through the Internet.":::
+
+3. Now try to invoke the AFD endpoint URL with any OWASP Top 10 attack pattern or bot attack. You should receive a `REQUEST IS BLOCKED` message, and the request is blocked. The API is now protected from web attacks by Azure Web Application Firewall.
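   For example, a sketch of simulating a simple injection attempt against the AFD endpoint (the hostname is a placeholder for your actual endpoint; the request is expected to fail):

   ```azurepowershell-interactive
   # Placeholder endpoint; a query string resembling an XSS payload should be blocked by the WAF.
   # Expected result: the request fails with 403 Forbidden and the WAF block message.
   Invoke-WebRequest -Uri "https://bookfrontdoor.z01.azurefd.net/book/test?q=<script>alert(1)</script>"
   ```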
+
+## Related content
+
+- [What is Azure Web Application Firewall?](../overview.md)
+- [Recommendations to mitigate OWASP API Security Top 10 threats using API Management](../../api-management/mitigate-owasp-api-threats.md)
+- [Configure Front Door Standard/Premium in front of Azure API Management](../../api-management/front-door-api-management.md)
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
CRS 3.0 includes 13 rule groups, as shown in the following table. Each group con
CRS 2.2.9 includes 10 rule groups, as shown in the following table. Each group contains multiple rules, which can be disabled. > [!NOTE]
-> CRS 2.2.9 is no longer supported for new WAF policies. We recommend you upgrade to the latest CRS version.
+> CRS 2.2.9 is no longer supported for new WAF policies. We recommend you upgrade to the latest CRS version. CRS 2.2.9 can't be used along with CRS 3.2/DRS 2.1 or later versions.
|Rule group|Description| |||
CRS 2.2.9 includes 10 rule groups, as shown in the following table. Each group c
### Bot rules
-You can enable a managed bot protection rule set to take custom actions on requests from all bot categories.
+You can enable a managed bot protection rule set to take custom actions on requests from all bot categories.
|Rule group|Description| |||