Updates from: 08/31/2021 03:05:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/conditional-access-user-flow.md
The following template can be used to create a Conditional Access policy with di
Identity Protection can calculate what it believes is normal for a user's behavior and use that as a baseline for risk decisions. User risk is a calculation of the probability that an identity has been compromised. B2C tenants with P2 licenses can create Conditional Access policies that incorporate user risk. When a user is detected as at risk, you can require that they securely change their password to remediate the risk and gain access to their account. We highly recommend setting up a user risk policy to require a secure password change so users can self-remediate.
-Learn more about [user risk in Identity Protection](../active-directory/identity-protection/concept-identity-protection-risks.md#user-risk), taking into account the [limitations on Identity Protection detections for B2C](identity-protection-investigate-risk.md#service-limitations-and-considerations).
+Learn more about [user risk in Identity Protection](../active-directory/identity-protection/concept-identity-protection-risks.md#user-linked-detections), taking into account the [limitations on Identity Protection detections for B2C](identity-protection-investigate-risk.md#service-limitations-and-considerations).
Configure Conditional Access through Azure portal or Microsoft Graph APIs to enable a user risk-based Conditional Access policy requiring multi-factor authentication (MFA) and password change when user risk is medium OR high.
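For orientation, here's a minimal TypeScript sketch of creating such a policy through Microsoft Graph. This isn't the article's exact template; the request body simply mirrors the settings described above, and acquiring `accessToken` (a Graph token with the `Policy.ReadWrite.ConditionalAccess` permission) is elided:

```typescript
// Sketch: create a user risk-based Conditional Access policy via Microsoft Graph.
declare const accessToken: string; // assumed to already hold a suitable Graph token

const policy = {
  displayName: 'Require MFA and password change for medium or high user risk',
  state: 'enabled',
  conditions: {
    userRiskLevels: ['medium', 'high'],
    applications: { includeApplications: ['All'] },
    users: { includeUsers: ['All'] },
  },
  // AND combines the controls: the user must complete MFA and change the password.
  grantControls: { operator: 'AND', builtInControls: ['mfa', 'passwordChange'] },
};

const response = await fetch('https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies', {
  method: 'POST',
  headers: { Authorization: `Bearer ${accessToken}`, 'Content-Type': 'application/json' },
  body: JSON.stringify(policy),
});
console.log(response.status); // 201 indicates the policy was created
```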
active-directory-b2c Configure Authentication Sample Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-angular-spa-app.md
Title: Configure authentication in a sample Angular spa application using Azure Active Directory B2C
-description: Using Azure Active Directory B2C to sign in and sign up users in an Angular SPA application.
+ Title: Configure authentication in a sample Angular SPA by using Azure Active Directory B2C
+description: Learn how to use Azure Active Directory B2C to sign in and sign up users in an Angular SPA.
-# Configure authentication in a sample Angular Single Page application using Azure Active Directory B2C
+# Configure authentication in a sample Angular single-page application by using Azure Active Directory B2C
-This article uses a sample Angular Single Page application (SPA) to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your Angular apps.
+This article uses a sample Angular single-page application (SPA) to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your Angular apps.
## Overview
-OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to securely sign a user in to an application. This Angular sample uses [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular) and the [MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser). MSAL is a Microsoft provided library that simplifies adding authentication and authorization support to Angular SPA apps.
+OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to securely sign in a user to an application. This Angular sample uses [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular) and the [MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser). MSAL is a Microsoft-provided library that simplifies adding authentication and authorization support to Angular SPAs.
-### Sign in flow
+### Sign-in flow
-The sign-in flow involves following steps:
+The sign-in flow involves the following steps:
-1. The user navigates to the app and selects **Sign-in**.
-1. The app initiates an authentication request, and redirects the user to Azure AD B2C.
-1. The user [signs up or signs in](add-sign-up-and-sign-in-policy.md), [resets the password](add-password-reset-policy.md), or signs in with a [social account](add-identity-provider.md).
+1. The user opens the app and selects **Sign-in**.
+1. The app starts an authentication request and redirects the user to Azure AD B2C.
+1. The user [signs up or signs in](add-sign-up-and-sign-in-policy.md), [resets the password](add-password-reset-policy.md), or signs in with a [social account](add-identity-provider.md).
1. Upon successful sign-in, Azure AD B2C returns an authorization code to the app. The app takes the following actions:
- 1. Exchanges the authorization code for an ID token, access token and refresh token.
- 1. Reads the ID token claims.
- 1. Stores the access token and refresh token in an in-memory cache for later use. The access token allows the user to call protected resources, such as a web API. The refresh token is used to acquire a new access token.
+ 1. Exchanges the authorization code for an ID token, access token, and refresh token.
+ 1. Reads the ID token claims.
+ 1. Stores the access token and refresh token in an in-memory cache for later use. The access token allows the user to call protected resources, such as a web API. The refresh token is used to acquire a new access token.
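In the sample, MSAL performs this exchange and caching for you; your code only asks MSAL for a token when it needs one. The following is a minimal sketch of that request, assuming MSAL Angular's `MsalService` and a hypothetical web API scope:

```typescript
import { MsalService } from '@azure/msal-angular';
import { AuthenticationResult } from '@azure/msal-browser';

// Hypothetical scope; use a scope that your own web API registration exposes.
const scopes = ['https://contoso.onmicrosoft.com/tasks-api/tasks.read'];

// Call from any class that injects MsalService (for example, a component).
function getAccessToken(authService: MsalService): void {
  const account = authService.instance.getAllAccounts()[0];

  // acquireTokenSilent returns the cached access token while it's still valid;
  // otherwise, MSAL redeems the cached refresh token for a new access token.
  authService.acquireTokenSilent({ scopes, account }).subscribe((result: AuthenticationResult) => {
    console.log(result.idTokenClaims); // claims read from the ID token
    console.log(result.accessToken);   // sent as a bearer token to the web API
  });
}
```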
-### App registration overview
+### App registration
-To enable your app to sign in with Azure AD B2C and call a web API, you must register two applications in the Azure AD B2C directory.
+To enable your app to sign in with Azure AD B2C and call a web API, you must register two applications in the Azure AD B2C directory:
-- The **Single page application** (Angular) registration enables your app to sign in with Azure AD B2C. During app registration, you specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected after they authenticate with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app. For example, **App ID: 1**.
+- The *single-page application* (Angular) registration enables your app to sign in with Azure AD B2C. During app registration, you specify the *redirect URI*. The redirect URI is the endpoint to which the user is redirected after they authenticate with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, that uniquely identifies your app. This article uses the example **App ID: 1**.
-- The **web API** registration enables your app to call a protected web API. The registration exposes the web API permissions (scopes). The app registration process generates an *Application ID* that uniquely identifies your web API. For example, **App ID: 2**. Grant your app (App ID: 1) permissions to the web API scopes (App ID: 2).
+- The *web API* registration enables your app to call a protected web API. The registration exposes the web API permissions (scopes). The app registration process generates an application ID that uniquely identifies your web API. This article uses the example **App ID: 2**. Grant your app (**App ID: 1**) permissions to the web API scopes (**App ID: 2**).
-The following diagrams describe the app registrations and the application architecture.
+The following diagram describes the app registrations and the app architecture.
-![Diagram describes a SPA app with web API, registrations and tokens.](./media/configure-authentication-sample-angular-spa-app/spa-app-with-api-architecture.png)
+![Diagram that describes a single-page application with web A P I, registrations, and tokens.](./media/configure-authentication-sample-angular-spa-app/spa-app-with-api-architecture.png)
### Call to a web API
[!INCLUDE [active-directory-b2c-app-integration-call-api](../../includes/active-directory-b2c-app-integration-call-api.md)]
-### Sign out flow
+### Sign-out flow
[!INCLUDE [active-directory-b2c-app-integration-sign-out-flow](../../includes/active-directory-b2c-app-integration-sign-out-flow.md)]
## Prerequisites
-A computer that's running:
+Before you follow the procedures in this article, make sure that your computer is running:
-* [Visual Studio Code](https://code.visualstudio.com/), or another code editor
-* [Node.js runtime](https://nodejs.org/en/download/) and [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
-* [Angular LCI](https://angular.io/cli)
+* [Visual Studio Code](https://code.visualstudio.com/) or another code editor.
+* [Node.js runtime](https://nodejs.org/en/download/) and [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
+* [Angular CLI](https://angular.io/cli).
## Step 1: Configure your user flow
A computer that's running:
## Step 2: Register your Angular SPA and API
-In this step, you create the Angular SPA app and the web API application registrations, and specify the scopes of your web API.
+In this step, you create the registrations for the Angular SPA and the web API app. You also specify the scopes of your web API.
### 2.1 Register the web API application
In this step, you create the Angular SPA app and the web API application registr
Follow these steps to create the Angular app registration:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. Select the **Directory + Subscription** icon on the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Select **App registrations**, and then select **New registration**.
-1. Enter a **Name** for the application. For example, *MyApp*.
+1. For **Name**, enter a name for the application. For example, enter **MyApp**.
1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
-1. Under **Redirect URI**, select **Single-page application (SPA)**, and then enter `http://localhost:4200` in the URL text box.
-1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** check box.
+1. Under **Redirect URI**, select **Single-page application (SPA)**, and then enter `http://localhost:4200` in the URL box.
+1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
1. Select **Register**.
-1. Record the **Application (client) ID** for use in a later step when you configure the web application.
- ![Screenshot showing how to get the Angular application ID.](./media/configure-authentication-sample-angular-spa-app/get-azure-ad-b2c-app-id.png)
+1. Record the **Application (client) ID** value for use in a later step when you configure the web application.
+ ![Screenshot that shows how to get the Angular application I D.](./media/configure-authentication-sample-angular-spa-app/get-azure-ad-b2c-app-id.png)
### 2.5 Grant permissions
Follow these steps to create the Angular app registration:
## Step 3: Get the Angular sample code
-This sample demonstrates how an Angular single-page application can use Azure AD B2C for user sign-up and sign-in. Then the app acquires an access token and calls a protected web API. Download the sample below:
+This sample demonstrates how an Angular single-page application can use Azure AD B2C for user sign-up and sign-in. Then the app acquires an access token and calls a protected web API.
- [Download a zip file](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/archive/refs/heads/main.zip) or clone the sample from the [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/):
+ [Download a .zip file](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/archive/refs/heads/main.zip) of the sample, or clone the sample from the [GitHub repository](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/) by using the following command:
```
git clone https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial.git
This sample demonstrates how an Angular single-page application can use Azure AD
### 3.1 Configure the Angular sample
-Now that you've obtained the SPA app sample, update the code with your Azure AD B2C and web API values. In the sample folder, under the `src/app` folder, open the `auth-config.ts` file, and update with keys the corresponding values:
+Now that you've obtained the SPA sample, update the code with your Azure AD B2C and web API values. In the sample folder, under the *src/app* folder, open the *auth-config.ts* file. Update the keys with the corresponding values:
|Section |Key |Value |
|---|---|---|
-| b2cPolicies | names |The user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow). |
-| b2cPolicies | authorities | Replace `your-tenant-name` with your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`. Then, replace the policy name with the user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow). For example, `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`. |
-| b2cPolicies | authorityDomain|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`. |
+| b2cPolicies | names |The user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow). |
+| b2cPolicies | authorities | Replace `your-tenant-name` with your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, use `contoso.onmicrosoft.com`. Then, replace the policy name with the user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow). For example: `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`. |
+| b2cPolicies | authorityDomain|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example: `contoso.onmicrosoft.com`. |
| Configuration | clientId | The Angular application ID from [step 2.3](#23-register-the-angular-app). |
-| protectedResources| endpoint| The URL of the web API, `http://localhost:5000/api/todolist`. |
-| protectedResources| scopes| The web API scopes you created in [step 2.2](#22-configure-scopes). For example, `b2cScopes: ["https://<your-tenant-namee>.onmicrosoft.com/tasks-api/tasks.read"]`. |
+| protectedResources| endpoint| The URL of the web API: `http://localhost:5000/api/todolist`. |
+| protectedResources| scopes| The web API scopes that you created in [step 2.2](#22-configure-scopes). For example: `b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read"]`. |
-Your resulting *src/app/auth-config.ts* code should look similar to following sample:
+Your resulting *src/app/auth-config.ts* code should look similar to the following sample:
```typescript
export const b2cPolicies = {
export const protectedResources = {
## Step 4: Get the web API sample code
-Now that the web API is registered and you've defined its scopes, configure the web API code to work with your Azure AD B2C tenant. Download the sample below:
+Now that the web API is registered and you've defined its scopes, configure the web API code to work with your Azure AD B2C tenant.
-[Download a \*.zip archive](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi/archive/master.zip), or clone the sample web API project from GitHub. You can also browse directly to the [Azure-Samples/active-directory-b2c-javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) project on GitHub.
+[Download a \*.zip archive](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi/archive/master.zip), or clone the sample web API project from GitHub. You can also browse directly to the [Azure-Samples/active-directory-b2c-javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) project on GitHub by using the following command:
```console
git clone https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi.git
git clone https://github.com/Azure-Samples/active-directory-b2c-javascript-nodej
### 4.1 Configure the web API
-In the sample folder, open the *config.json* file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token the web app passes as a bearer token. Update the following properties of the app settings:
+In the sample folder, open the *config.json* file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token that the web app passes as a bearer token. Update the following properties of the app settings:
|Section |Key |Value |
|---|---|---|
-|credentials|tenantName| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso`.|
-|credentials|clientID| The web API application ID from step [2.1](#21-register-the-web-api-application). In the [diagram above](#app-registration-overview), it's the application with *App ID: 2*.|
-|credentials| issuer| (Optional) The token issuer `iss` claim value. Azure AD B2C by default returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace the `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace the `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
-|policies|policyName|The user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow). If your application uses multiple user flows or custom policies, specify only one. For example, the sign-up or sign-in user flow.|
-| resource| scope | The scopes of your web API application registration from step [2.5])(#25-grant-permissions). |
+|credentials|tenantName| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example: `contoso`.|
+|credentials|clientID| The web API application ID from step [2.1](#21-register-the-web-api-application). In the [earlier diagram](#app-registration), it's the application with **App ID: 2**.|
+|credentials| issuer| (Optional) The token issuer `iss` claim value. Azure AD B2C by default returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
+|policies|policyName|The user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow). If your application uses multiple user flows or custom policies, specify only one. For example, use the sign-up or sign-in user flow.|
+| resource| scope | The scopes of your web API application registration from [step 2.5](#25-grant-permissions). |
Your final configuration file should look like the following JSON:
## Step 5: Run the Angular SPA and web API
-You're now ready to test the Angular's scoped access to the API. In this step, run both the web API and the sample Angular application on your local machine. Then, sign in to the Angular application, and select the **TodoList** button to start a request to the protected API.
+You're now ready to test the Angular app's scoped access to the API. In this step, run both the web API and the sample Angular application on your local machine. Then, sign in to the Angular application, and select the **TodoList** button to start a request to the protected API.
### Run the web API
-1. Open a console window and change to the directory containing the web API sample. For example:
+1. Open a console window and change to the directory that contains the web API sample. For example:
```console
cd active-directory-b2c-javascript-nodejs-webapi
You're now ready to test the Angular's scoped access to the API. In this step, r
node index.js
```
- The console window displays the port number where the application is hosted.
+ The console window displays the port number where the application is hosted:
```console
Listening on port 5000...
You're now ready to test the Angular's scoped access to the API. In this step, r
### Run the Angular application
-1. Open another console window and change to the directory containing the Angular sample. For example:
+1. Open another console window and change to the directory that contains the Angular sample. For example:
```console
cd ms-identity-javascript-angular-tutorial-main/3-Authorization-II/2-call-api-b2c/SPA
You're now ready to test the Angular's scoped access to the API. In this step, r
npm start
```
- The console window displays the port number of where the application is hosted.
+ The console window displays the port number of where the application is hosted:
```console
Listening on port 4200...
```
-1. Navigate to `http://localhost:4200` in your browser to view the application.
+1. Go to `http://localhost:4200` in your browser to view the application.
1. Select **Login**.
- ![Screenshot showing the Angular sample app with the login link.](./media/configure-authentication-sample-angular-spa-app/sample-app-sign-in.png)
+ ![Screenshot that shows the Angular sample app with the login link.](./media/configure-authentication-sample-angular-spa-app/sample-app-sign-in.png)
-1. Complete the sign-up or sign-in process.
-1. Upon successful login, you should see your profile. From the menu, select **ToDoList**.
+1. Complete the sign-up or sign-in process.
+1. Upon successful sign-in, you should see your profile. From the menu, select **TodoList**.
- ![Screenshot showing the Angular sample app with the user profile, and the call to the to do list.](./media/configure-authentication-sample-angular-spa-app/sample-app-result.png)
+ ![Screenshot that shows the Angular sample app with the user profile, and the call to the to-do list.](./media/configure-authentication-sample-angular-spa-app/sample-app-result.png)
-1. **Add** new items to the list, **delete**, or **edit** items.
+1. Select **Add** to add new items to the list, or use the icons to delete or edit items.
- ![Screenshot showing the Angular sample app's call to the to do list.](./media/configure-authentication-sample-angular-spa-app/sample-app-calls-web-api.png)
+ ![Screenshot that shows the Angular sample app's call to the to-do list.](./media/configure-authentication-sample-angular-spa-app/sample-app-calls-web-api.png)
## Deploy your application
-In a production application, the app registration redirect URI is typically a publicly accessible endpoint where your app is running, like `https://contoso.com`.
+In a production application, the redirect URI for the app registration is typically a publicly accessible endpoint where your app is running, like `https://contoso.com`.
You can add and modify redirect URIs in your registered applications at any time. The following restrictions apply to redirect URIs:
You can add and modify redirect URIs in your registered applications at any time
## Next steps
-* Learn more [about the code sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/)
+* [Learn more about the code sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/)
* [Enable authentication in your own Angular application](enable-authentication-angular-spa-app.md)
-* Configure [authentication options in your Angular application](enable-authentication-angular-spa-app-options.md)
+* [Configure authentication options in your Angular application](enable-authentication-angular-spa-app-options.md)
* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-developer-notes.md
The following table summarizes the Security Assertion Markup Language (SAML) app
|Feature |User flow |Custom policy |Notes |
|---|:-:|:-:|---|
| [SP initiated](saml-service-provider.md) | NA | GA | POST and Redirect bindings. |
-[IDP initiated](saml-service-provider-options.md#identity-provider-initiated-flow) | NA | GA | Where the initiating identity provider is Azure AD B2C. |
+[IDP initiated](saml-service-provider-options.md#configure-idp-initiated-flow) | NA | GA | Where the initiating identity provider is Azure AD B2C. |
## User experience customization
The following table summarizes the Security Assertion Markup Language (SAML) app
|[OAuth2](oauth2-technical-profile.md) | NA | GA | For example, [Google](identity-provider-google.md), [GitHub](identity-provider-github.md), and [Facebook](identity-provider-facebook.md).|
|[OAuth1](oauth1-technical-profile.md) | NA | GA | For example, [Twitter](identity-provider-twitter.md). |
|[OpenID Connect](openid-connect-technical-profile.md) | GA | GA | For example, [Azure AD](identity-provider-azure-ad-single-tenant.md). |
-|[SAML2](identity-provider-generic-saml.md) | NA | GA | For example, [Salesforce](identity-provider-salesforce-saml.md) and [AD-FS].(identity-provider-adfs.md) |
+|[SAML2](identity-provider-generic-saml.md) | NA | GA | For example, [Salesforce](identity-provider-salesforce-saml.md) and [AD-FS](identity-provider-adfs.md). |
| WSFED | NA | NA | |
### API connectors
Developers consuming the custom policy feature set should adhere to the followin
## Next steps
-- Check the [Microsoft Graph operations available for Azure AD B2C](microsoft-graph-operations.md)
+- Check the [Microsoft Graph operations available for Azure AD B2C](microsoft-graph-operations.md).
- Learn more about [custom policies and the differences with user flows](custom-policy-overview.md).
active-directory-b2c Enable Authentication Angular Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-angular-spa-app-options.md
Title: Enable Angular application options using Azure Active Directory B2C
-description: Enable the use of Angular application options by using several ways.
+ Title: Enable Angular application options by using Azure Active Directory B2C
+description: Enable the use of Angular application options in several ways.
-# Configure authentication options in an Angular application using Azure Active Directory B2C
+# Configure authentication options in an Angular application by using Azure Active Directory B2C
-This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your Angular application. Before you start, familiarize yourself with the following article: [Configure authentication in an Angular SPA application](configure-authentication-sample-angular-spa-app.md), or [Enable authentication in your own Angular SPA application](enable-authentication-angular-spa-app.md).
+This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your Angular single-page application (SPA). Before you start, familiarize yourself with the article [Configure authentication in an Angular SPA](configure-authentication-sample-angular-spa-app.md) or [Enable authentication in your own Angular SPA](enable-authentication-angular-spa-app.md).
-## Single-page application sign-in and sign-out behavior
+## Sign-in and sign-out behavior
-You can configure your single page application to sign in users with MSAL.js in two ways:
+You can configure your single-page application to sign in users with MSAL.js in two ways:
-- **Pop-up window** - The authentication happens in a pop-up window, the state of the application is preserved. Use this approach if you don't want users to move away from your application page during authentication. Note, there are [known issues with pop-up windows on Internet Explorer](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups).
- - To sign in with popup windows, in the *src/app/app.component.ts* class, use the `loginPopup` method.
- - In the *src/app/app.module.ts* class, set the `interactionType` attribute to `InteractionType.Popup`.
- - To sign out with popup windows, in the *src/app/app.component.ts* class, use the `logoutPopup` method. You can also configure `logoutPopup` to redirect the main window to a different page, such as the home page or sign-in page, after logout is complete by passing `mainWindowRedirectUri` as part of the request.
-- **Redirect** - The user is redirected to Azure AD B2C to complete the authentication flow. Use this approach if users have browser constraints or policies where pop-up windows are disabled.
- - To sign-in with redirection, in the *src/app/app.component.ts* class, use the `loginRedirect` method.
- - In the *src/app/app.module.ts* class, set the `interactionType` attribute to `InteractionType.Redirect`.
- - To sign out with redirection, in the *src/app/app.component.ts* class, use the `logoutRedirect` method. Configure the URI to which it should redirect after sign-out by setting `postLogoutRedirectUri`. This URI should be registered as a redirect Uri in your application registration.
+- **Pop-up window**: The authentication happens in a pop-up window, and the state of the application is preserved. Use this approach if you don't want users to move away from your application page during authentication. Note that there are [known issues with pop-up windows on Internet Explorer](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups).
+ - To sign in with pop-up windows, in the `src/app/app.component.ts` class, use the `loginPopup` method.
+ - In the `src/app/app.module.ts` class, set the `interactionType` attribute to `InteractionType.Popup`.
+ - To sign out with pop-up windows, in the `src/app/app.component.ts` class, use the `logoutPopup` method. You can also configure `logoutPopup` to redirect the main window to a different page, such as the home page or sign-in page, after sign-out is complete by passing `mainWindowRedirectUri` as part of the request.
+- **Redirect**: The user is redirected to Azure AD B2C to complete the authentication flow. Use this approach if users have browser constraints or policies where pop-up windows are disabled.
+ - To sign in with redirection, in the `src/app/app.component.ts` class, use the `loginRedirect` method.
+ - In the `src/app/app.module.ts` class, set the `interactionType` attribute to `InteractionType.Redirect`.
+ - To sign out with redirection, in the `src/app/app.component.ts` class, use the `logoutRedirect` method. Configure the URI to which it should redirect after sign-out by setting `postLogoutRedirectUri`. This URI should be registered as a redirect URI in your application registration.
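Before the tabbed sample, here's a minimal sketch of the four component methods. It assumes `MsalService` is injected as `authService`, and the redirect URIs are examples:

```typescript
// Pop-up variant
login(): void {
  this.authService.loginPopup().subscribe((result) => console.log(result));
}
logout(): void {
  // Optionally send the main window to the home page after sign-out completes.
  this.authService.logoutPopup({ mainWindowRedirectUri: '/' });
}

// Redirect variant
loginWithRedirect(): void {
  this.authService.loginRedirect();
}
logoutWithRedirect(): void {
  // This URI must also be registered as a redirect URI on the app registration.
  this.authService.logoutRedirect({ postLogoutRedirectUri: 'http://localhost:4200' });
}
```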
The following sample demonstrates how to sign in and sign out:
-#### [Popup](#tab/popup)
+#### [Pop-up](#tab/popup)
```typescript
logout() {
-The MSAL Angular library has three sign-in flows: interactive sign-in (where a user selects the sign-in button), MSAL Guard, and MSAL Interceptor. The MSAL Guard and MSAL Interceptor configurations take effect when a user tries to access a protected resource without a valid access token. In such cases, the MSAL library forces the user to sign in. The following samples demonstrate how to configure MSAL Guard and MSAL Interceptor for sign-in with a pop-up window or redirection.
+The MSAL Angular library has three sign-in flows: interactive sign-in (where a user selects the sign-in button), MSAL Guard, and MSAL Interceptor. The MSAL Guard and MSAL Interceptor configurations take effect when a user tries to access a protected resource without a valid access token. In such cases, the MSAL library forces the user to sign in.
-#### [Popup](#tab/popup)
+The following samples demonstrate how to configure MSAL Guard and MSAL Interceptor for sign-in with a pop-up window or redirection:
+
+#### [Pop-up](#tab/popup)
```typescript
// src/app/app.module.ts
MsalModule.forRoot(new PublicClientApplication(msalConfig),
1. If you use a custom policy, add the required input claim as described in [Set up direct sign-in](direct-signin.md#prepopulate-the-sign-in-name).
1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object.
-1. Set the `loginHint` attribute with the corresponding login hint. For example: bob@contoso.com.
+1. Set the `loginHint` attribute with the corresponding sign-in hint.
-The following code snippets demonstrate how to pass the login hint parameter:
+The following code snippets demonstrate how to pass the sign-in hint parameter. They use `bob@contoso.com` as the attribute value.
-#### [Popup](#tab/popup)
+#### [Pop-up](#tab/popup)
```typescript
// src/app/app.component.ts
MsalModule.forRoot(new PublicClientApplication(msalConfig),
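As a point of reference, here's a minimal redirect-based sketch of this step, assuming `MsalService` is injected as `authService` and the call runs inside a component method:

```typescript
import { RedirectRequest } from '@azure/msal-browser';

const loginRequest: RedirectRequest = {
  scopes: [],
  loginHint: 'bob@contoso.com', // prepopulates the sign-in name field
};
this.authService.loginRedirect(loginRequest);
```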
1. Check the domain name of your external identity provider. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider).
1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object.
-1. Set the `domainHint` attribute with the corresponding domain hint. For example: facebook.com.
+1. Set the `domainHint` attribute with the corresponding domain hint.
-The following code snippets demonstrate how to pass the domain hint parameter:
+The following code snippets demonstrate how to pass the domain hint parameter. They use `facebook.com` as the attribute value.
-#### [Popup](#tab/popup)
+#### [Pop-up](#tab/popup)
```typescript
// src/app/app.component.ts
MsalModule.forRoot(new PublicClientApplication(msalConfig),
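A comparable redirect-based sketch for the domain hint, under the same assumptions:

```typescript
import { RedirectRequest } from '@azure/msal-browser';

const loginRequest: RedirectRequest = {
  scopes: [],
  domainHint: 'facebook.com', // sends the user straight to the social provider
};
this.authService.loginRedirect(loginRequest);
```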
1. [Configure Language customization](language-customization.md).
1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object with `extraQueryParameters` attributes.
-1. Add the `ui_locales` parameter with the corresponding language code to the `extraQueryParameters` attributes. For example, `es-es`.
+1. Add the `ui_locales` parameter with the corresponding language code to the `extraQueryParameters` attributes.
-The following code snippets demonstrate how to pass the domain hint parameter:
+The following code snippets demonstrate how to pass the `ui_locales` parameter. They use `es-es` as the attribute value.
-#### [Popup](#tab/popup)
+#### [Pop-up](#tab/popup)
```typescript
// src/app/app.component.ts
MsalModule.forRoot(new PublicClientApplication(msalConfig),
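A comparable redirect-based sketch for the language code, under the same assumptions:

```typescript
import { RedirectRequest } from '@azure/msal-browser';

const loginRequest: RedirectRequest = {
  scopes: [],
  extraQueryParameters: { ui_locales: 'es-es' }, // renders the Azure AD B2C pages in Spanish
};
this.authService.loginRedirect(loginRequest);
```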
1. Configure the [ContentDefinitionParameters](customize-ui-with-html.md#configure-dynamic-custom-page-content-uri) element.
1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object with `extraQueryParameters` attributes.
-1. Add the custom query string parameter, such as `campaignId`. Set the parameter value. For example, `germany-promotion`.
+1. Add the custom query string parameter, such as `campaignId`. Set the parameter value.
-The following code snippets demonstrate how to pass a custom query string parameter:
+The following code snippets demonstrate how to pass a custom query string parameter. They use `germany-promotion` as the attribute value.
-#### [Popup](#tab/popup)
+#### [Pop-up](#tab/popup)
```typescript
// src/app/app.component.ts
MsalModule.forRoot(new PublicClientApplication(msalConfig),
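A comparable redirect-based sketch for the custom query string parameter, under the same assumptions:

```typescript
import { RedirectRequest } from '@azure/msal-browser';

const loginRequest: RedirectRequest = {
  scopes: [],
  // Consumed by the ContentDefinitionParameters element in the custom policy.
  extraQueryParameters: { campaignId: 'germany-promotion' },
};
this.authService.loginRedirect(loginRequest);
```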
[!INCLUDE [active-directory-b2c-app-integration-id-token-hint](../../includes/active-directory-b2c-app-integration-id-token-hint.md)]
-1. In your custom policy, define an [ID token hint technical profile](id-token-hint.md).
+1. In your custom policy, define the [technical profile of an ID token hint](id-token-hint.md).
1. Create or use an existing `PopupRequest` or `RedirectRequest` MSAL configuration object with `extraQueryParameters` attributes.
1. Add the `id_token_hint` parameter with the corresponding variable that stores the ID token.
-The following code snippets demonstrate how to an ID token hint:
+The following code snippets demonstrate how to define an ID token hint:
-#### [Popup](#tab/popup)
+#### [Pop-up](#tab/popup)
```typescript
// src/app/app.component.ts
MsalModule.forRoot(new PublicClientApplication(msalConfig),
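A comparable redirect-based sketch for the ID token hint, under the same assumptions; `idToken` is a hypothetical variable that already holds the signed token:

```typescript
import { RedirectRequest } from '@azure/msal-browser';

declare const idToken: string; // assumed to already hold the signed ID token

const loginRequest: RedirectRequest = {
  scopes: [],
  extraQueryParameters: { id_token_hint: idToken },
};
this.authService.loginRedirect(loginRequest);
```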
[!INCLUDE [active-directory-b2c-app-integration-custom-domain](../../includes/active-directory-b2c-app-integration-custom-domain.md)]
-To use your custom domain your tenant ID in the authentication URL, follow the guidance in [Enable custom domains](custom-domain.md). Open the *src/app/auth-config.ts* MSAL configuration object and change the **authorities** and **knownAuthorities** to use your custom domain name and tenant ID.
+To use your custom domain and your tenant ID in the authentication URL, follow the guidance in [Enable custom domains](custom-domain.md). Open the `src/app/auth-config.ts` MSAL configuration object and change `authorities` and `knownAuthorities` to use your custom domain name and tenant ID.
The following JavaScript shows the MSAL configuration object before the change:
const msalConfig = {
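After the change, the authority entries might look like the following sketch; the custom domain `login.contoso.com`, the tenant ID, and the policy name are placeholders:

```typescript
// src/app/auth-config.ts (sketch): custom domain plus tenant ID
export const b2cPolicies = {
  authorities: {
    signUpSignIn: {
      authority: 'https://login.contoso.com/00000000-0000-0000-0000-000000000000/B2C_1_susi',
    },
  },
  authorityDomain: 'login.contoso.com', // also listed in knownAuthorities
};
```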
[!INCLUDE [active-directory-b2c-app-integration-logging](../../includes/active-directory-b2c-app-integration-logging.md)]
-To configure Angular [logging](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/logging.md), in the *src/app/auth-config.ts* configure the following keys:
+To configure Angular [logging](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/logging.md), in *src/app/auth-config.ts*, configure the following keys:
- `loggerCallback` is the logger callback function.
-- `logLevel` lets you specify the level of logging you want. Possible values: `Error`, `Warning`, `Info`, and `Verbose`.
-- `piiLoggingEnabled` enables the input of personal data. Possible values: `true`, or `false`.
+- `logLevel` lets you specify the level of logging. Possible values: `Error`, `Warning`, `Info`, and `Verbose`.
+- `piiLoggingEnabled` enables the logging of personal data. Possible values: `true` or `false`.
The following code snippet demonstrates how to configure MSAL logging:
export const msalConfig: Configuration = {
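For context, here's a sketch of those keys in place; the shape follows the MSAL `Configuration` type, and the values shown are examples:

```typescript
import { Configuration, LogLevel } from '@azure/msal-browser';

export const msalConfig: Configuration = {
  auth: {
    clientId: '<your-client-id>', // placeholder
  },
  system: {
    loggerOptions: {
      // Route MSAL log messages to the browser console.
      loggerCallback: (level: LogLevel, message: string, containsPii: boolean) => {
        if (!containsPii) {
          console.log(message);
        }
      },
      logLevel: LogLevel.Verbose,
      piiLoggingEnabled: false, // don't log personal data
    },
  },
};
```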
## Next steps
-- Learn more: [MSAL.js configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md)
+- Learn more: [MSAL.js configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md).
active-directory-b2c Enable Authentication Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-angular-spa-app.md
Title: Enable authentication in an Angular application using Azure Active Directory B2C building blocks
-description: The building blocks of Azure Active Directory B2C to sign in and sign up users in an Angular application.
+ Title: Enable authentication in an Angular application by using Azure Active Directory B2C building blocks
+description: Use the building blocks of Azure Active Directory B2C to sign in and sign up users in an Angular application.
-# Enable authentication in your own Angular Application using Azure Active Directory B2C
+# Enable authentication in your own Angular application by using Azure Active Directory B2C
-This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own Angular Single Page Application (SPA). Learn how to integrate an Angular application with [MSAL for Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/master/lib/msal-angular) authentication library.
+This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own Angular single-page application (SPA). Learn how to integrate an Angular application with the [MSAL for Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/master/lib/msal-angular) authentication library.
-Use this article with [Configure authentication in a sample Angular SPA application](./configure-authentication-sample-angular-spa-app.md), substituting the sample Angular app with your own Angular app. After completing the steps in this article, your application will accept sign-ins via Azure AD B2C.
+Use this article with the related article titled [Configure authentication in a sample Angular single-page application](./configure-authentication-sample-angular-spa-app.md). Substitute the sample Angular app with your own Angular app. After you complete the steps in this article, your application will accept sign-ins via Azure AD B2C.
## Prerequisites
-Review the prerequisites and integration steps in [Configure authentication in a sample Angular SPA application](configure-authentication-sample-angular-spa-app.md) article.
+Review the prerequisites and integration steps in the [Configure authentication in a sample Angular single-page application](configure-authentication-sample-angular-spa-app.md) article.
## Create an Angular app project
-You can use an existing Angular app project, or create a new one. To create a new project, run the following commands.
+You can use an existing Angular app project or create a new one. To create a new project, run the following commands.
-The following commands:
+The commands:
-1. Install the [Angular CLI](https://angular.io/cli) using the npm package manager.
-1. [Creates an Angular workspace](https://angular.io/cli/new) with routing module. The app name is `msal-angular-tutorial`, you can change it to any valid angular app name, such as `contoso-car-service`.
+1. Install the [Angular CLI](https://angular.io/cli) by using the npm package manager.
+1. [Create an Angular workspace](https://angular.io/cli/new) with a routing module. The app name is `msal-angular-tutorial`. You can change it to any valid Angular app name, such as `contoso-car-service`.
1. Change to the app directory folder.
```
cd msal-angular-tutorial
## Install the dependencies
-To install the [MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/master/lib/msal-browser) and [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/master/lib/msal-angular) libraries in your application, in your command shell run the following commands:
+To install the [MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/master/lib/msal-browser) and [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/master/lib/msal-angular) libraries in your application, run the following command in your command shell:
```
npm install @azure/msal-browser @azure/msal-angular
```
-Install the [Angular Material component library](https://material.angular.io/) (optional, for UI).
+Install the [Angular Material component library](https://material.angular.io/) (optional, for UI):
```
npm install @angular/material @angular/cdk
npm install @angular/material @angular/cdk
## Add the authentication components
-The sample code is made up of the following components:
+The sample code consists of the following components:
|Component |Type |Description |
|---|---|---|
-| auth-config.ts| Constants | A configuration file that contains information about your Azure AD B2C identity provider and the web API service. The Angular app uses this information to establish a trust relationship with Azure AD B2C, sign the user in and out, acquire tokens, and validate them. |
-| app.module.ts| [Angular module](https://angular.io/guide/architecture-modules)| Describes how the application parts fit together. This is the root module that is used to bootstrap and launch the application. In this walkthrough, you add some components to the *app.module.ts* module, and initiate the MSAL library with the MSAL config object. |
-| app-routing.module.ts | [Angular routing module](https://angular.io/tutorial/toh-pt5) | Enables navigation by interpreting a browser URL and loading the corresponding component. In this walkthrough, you add some components to the routing module, and protect components with [MSAL guard](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-guard.md). Only authorized users can access the protected components. |
-| app.component.* | [Angular component](https://angular.io/guide/architecture-components) | The `ng new` command created an Angular project with a root component. In this walkthrough, you change the app component to host the top navigation bar. The navigation bar contains various buttons, including sign-in and sign-out. The *app.component.ts* class handles the sign-in and sign-out events. |
-| home.component.* | [Angular component](https://angular.io/guide/architecture-components)|In this walkthrough, you add the *home* component to render the anonymous access home page. This component demonstrates how to check whether a user has signed in. |
+| auth-config.ts| Constants | This configuration file contains information about your Azure AD B2C identity provider and the web API service. The Angular app uses this information to establish a trust relationship with Azure AD B2C, sign in and sign out the user, acquire tokens, and validate the tokens. |
+| app.module.ts| [Angular module](https://angular.io/guide/architecture-modules)| This component describes how the application parts fit together. This is the root module that's used to bootstrap and open the application. In this walkthrough, you add some components to the *app.module.ts* module, and you start the MSAL library with the MSAL configuration object. |
+| app-routing.module.ts | [Angular routing module](https://angular.io/tutorial/toh-pt5) | This component enables navigation by interpreting a browser URL and loading the corresponding component. In this walkthrough, you add some components to the routing module, and you protect components with [MSAL Guard](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-guard.md). Only authorized users can access the protected components. |
+| app.component.* | [Angular component](https://angular.io/guide/architecture-components) | The `ng new` command created an Angular project with a root component. In this walkthrough, you change the *app* component to host the top navigation bar. The navigation bar contains various buttons, including sign-in and sign-out buttons. The `app.component.ts` class handles the sign-in and sign-out events. |
+| home.component.* | [Angular component](https://angular.io/guide/architecture-components)|In this walkthrough, you add the *home* component to render the home page for anonymous access. This component demonstrates how to check whether a user has signed in. |
| profile.component.* | [Angular component](https://angular.io/guide/architecture-components) | In this walkthrough, you add the *profile* component to learn how to read the ID token claims. |
| webapi.component.* | [Angular component](https://angular.io/guide/architecture-components)| In this walkthrough, you add the *webapi* component to learn how to call a web API. |
-- To add the following components to your app, run the following Angular CLI commands. The `generate component` commands:
-1. Creates a folder for each component. The folder contains the TypeScript, HTML, CSS, and test files.
-1. Updates the `app.module.ts` and the `app-routing.module.ts` files with references to the new components.
+1. Create a folder for each component. The folder contains the TypeScript, HTML, CSS, and test files.
+1. Update the `app.module.ts` and `app-routing.module.ts` files with references to the new components.
```
ng generate component home
ng generate component webapi
## Add the app settings
-Azure AD B2C identity provider and web API settings are stored in the `auth-config.ts` file. In your *src/app* folder, create a file named *auth-config.ts* containing the following code. Then change the settings as described in the [3.1 Configure the Angular sample](configure-authentication-sample-angular-spa-app.md#31-configure-the-angular-sample).
+Settings for the Azure AD B2C identity provider and the web API are stored in the *auth-config.ts* file. In your *src/app* folder, create a file named *auth-config.ts* that contains the following code. Then change the settings as described in [3.1 Configure the Angular sample](configure-authentication-sample-angular-spa-app.md#31-configure-the-angular-sample).
```typescript
import { LogLevel, Configuration, BrowserCacheLocation } from '@azure/msal-browser';
export const loginRequest = {
};
```
-## Initiate the authentication libraries
+## Start the authentication libraries
-Public client applications are not trusted to safely keep application secrets and therefore don't have client secrets. In the *src/app* folder, open the *app.module.ts*, and make the following changes:
+Public client applications are not trusted to safely keep application secrets, so they don't have client secrets. In the *src/app* folder, open *app.module.ts* and make the following changes:
-1. Import MSAL and MSAL browser libraries.
+1. Import the MSAL Angular and MSAL Browser libraries.
1. Import the Azure AD B2C configuration module.
-1. Import the `HttpClientModule`. The HTTP client is used to call web APIs.
+1. Import `HttpClientModule`. The HTTP client is used to call web APIs.
1. Import the Angular HTTP interceptor. MSAL uses the interceptor to inject the bearer token into the HTTP Authorization header.
1. Add the essential Angular materials.
-1. Instantiate MSAL using the multiple account public client application object. The MSAL initialization includes passing:
- 1. The *auth-config.ts* configuration object.
- 1. The routing guard configuration object.
- 1. The MSAL interceptor configuration object. The interceptor class automatically acquires tokens for outgoing requests that use the Angular [HttpClient](https://angular.io/api/common/http/HttpClient) to known protected resources.
-1. Configure the `HTTP_INTERCEPTORS`, and `MsalGuard` [Angular providers](https://angular.io/guide/providers).
-1. Add the `MsalRedirectComponent` to the [Angular bootstrap](https://angular.io/guide/bootstrapping).
+1. Instantiate MSAL by using the multiple account public client application object. The MSAL initialization includes passing:
+ 1. The configuration object for *auth-config.ts*.
+ 1. The configuration object for the routing guard.
+ 1. The configuration object for the MSAL interceptor. The interceptor class automatically acquires tokens for outgoing requests that use the Angular [HttpClient](https://angular.io/api/common/http/HttpClient) class to known protected resources.
+1. Configure the `HTTP_INTERCEPTORS` and `MsalGuard` [Angular providers](https://angular.io/guide/providers).
+1. Add `MsalRedirectComponent` to the [Angular bootstrap](https://angular.io/guide/bootstrapping).
-In the *src/app* folder, edit *app.module.ts* and make the following modifications shown in the code snippet below. The changes are flagged with *Changes start here*, and *Changes end here*. After the changes, your code should look like the following code snippet.
+In the *src/app* folder, edit *app.module.ts* and make the modifications shown in the following code snippet. The changes are flagged with "Changes start here" and "Changes end here."
```typescript
import { NgModule } from '@angular/core';
import { MatTableModule } from '@angular/material/table';
// Import the HTTP client.
HttpClientModule,
- // Initiate the MSAL library with the MSAL config object
+ // Initiate the MSAL library with the MSAL configuration object
MsalModule.forRoot(new PublicClientApplication(msalConfig), {
  // The routing guard configuration.
export class AppModule { }
## Configure routes
-In this section, configure the routes for your Angular application. When a user selects a link on the page to navigate within your single-page application, or types a URL in the address bar, the routes map the URL to an Angular component. The Angular routing [canActivate](https://angular.io/api/router/CanActivate) interface uses the MSAL Guard to checks if user is signed-in. If the user isn't signed-in, MSAL takes the user to Azure AD B2C to authenticate.
-
-In the *src/app* folder, edit *app-routing.module.ts* make the following modifications shown in the code snippet below. The changes are flagged with *Changes start here*, and *Changes end here*.
+In this section, configure the routes for your Angular application. When a user selects a link on the page to move within your single-page application, or enters a URL in the address bar, the routes map the URL to an Angular component. The Angular routing [canActivate](https://angular.io/api/router/CanActivate) interface uses MSAL Guard to check if the user is signed in. If the user isn't signed in, MSAL takes the user to Azure AD B2C to authenticate.
-After the changes, your code should look like the following code snippet.
+In the *src/app* folder, edit *app-routing.module.ts* and make the modifications shown in the following code snippet. The changes are flagged with "Changes start here" and "Changes end here."
```typescript
import { NgModule } from '@angular/core';
const routes: Routes = [
{ path: 'profile', component: ProfileComponent,
- // The profile component is protected with MSAL guard.
+ // The profile component is protected with MSAL Guard.
canActivate: [MsalGuard] }, { path: 'webapi', component: WebapiComponent,
- // The profile component is protected with MSAL guard.
+ // The webapi component is protected with MSAL Guard.
canActivate: [MsalGuard] }, {
export class AppRoutingModule { }
## Add the sign-in and sign-out buttons
-In this section, you add the sign-in and sign-out buttons the *app* component. In the *src/app* folder, open the *app.component.ts*, and make the following changes:
+In this section, you add the sign-in and sign-out buttons to the *app* component. In the *src/app* folder, open the *app.component.ts* file and make the following changes:
1. Import the required components.
-1. Change the class to implement [OnInit method](https://angular.io/api/core/OnInit). The `OnInit` method subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `inProgress$` observable event. Use this event to know the status of user interactions, particularly to check that interactions are completed. Before interacting with MSAL account object, check the `InteractionStatus` property returns `InteractionStatus.None`. The `subscribe` event calls the `setLoginDisplay` method to check if the user is authenticated.
+1. Change the class to implement the [OnInit method](https://angular.io/api/core/OnInit). The `OnInit` method subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `inProgress$` observable event. Use this event to know the status of user interactions, particularly to check that interactions are completed.
+
+ Before interactions with the MSAL account object, check that the `InteractionStatus` property returns `InteractionStatus.None`. The `subscribe` event calls the `setLoginDisplay` method to check if the user is authenticated.
1. Add class variables.
-1. Add the `login` method that initiates authorization flow.
+1. Add the `login` method that starts the authorization flow.
1. Add the `logout` method that signs out the user.
1. Add the `setLoginDisplay` method that checks if the user is authenticated.
1. Add the [ngOnDestroy](https://angular.io/api/core/OnDestroy) method to clean up the `inProgress$` subscribe event.
export class AppComponent implements OnInit{
}
```
-In the *src/app* folder, edit *app.component.html*, and make the following changes:
+In the *src/app* folder, edit *app.component.html* and make the following changes:
1. Add a link to the profile and web API components.
-1. Add the login button with click event attribute set to the `login()` method. This button appears only if `loginDisplay` class variable is `false`.
-1. Add the logout button with click event attribute set to the `logout()` method. This button appears only if `loginDisplay` class variable is `true`.
+1. Add the login button with the click event attribute set to the `login()` method. This button appears only if the `loginDisplay` class variable is `false`.
+1. Add the logout button with the click event attribute set to the `logout()` method. This button appears only if the `loginDisplay` class variable is `true`.
1. Add a [router-outlet](https://angular.io/api/router/RouterOutlet) element.
-After the changes, your code should look like the following code snippet.
+After the changes, your code should look like the following code snippet:
```html
<mat-toolbar color="primary">
After the changes, your code should look like the following code snippet.
</div>
```
-Optionally, update the *app.component.css* file with the following CSS snippet.
+Optionally, update the *app.component.css* file with the following CSS snippet:
```css
.toolbar-spacer {
```
Optionally, update the *app.component.css* file with the following CSS snippet.
## Handle the app redirects
-When using redirects with MSAL, it is mandatory to add the [app-redirect](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/redirects.md) directive to the *https://docsupdatetracker.net/index.html*. In the *src* folder, edit *https://docsupdatetracker.net/index.html*.
-
-After the changes, your code should look like the following code snippet.
+When you're using redirects with MSAL, you must add the [app-redirect](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/redirects.md) directive to *https://docsupdatetracker.net/index.html*. In the *src* folder, edit *https://docsupdatetracker.net/index.html* as shown in the following code snippet:
```html
<!doctype html>
After the changes, your code should look like the following code snippet.
</html>
```
-## Set app CSS (Optional)
+## Set app CSS (optional)
-In the */src* folder, update the *styles.css* file with the following CSS snippet.
+In the */src* folder, update the *styles.css* file with the following CSS snippet:
```css
@import '~@angular/material/prebuilt-themes/deeppurple-amber.css';
body { margin: 0; font-family: Roboto, "Helvetica Neue", sans-serif; }
```

> [!TIP]
-> At this point you can run your app and test the sign-in experience. To run your application, see the [Run the Angular application](#run-the-angular-application) section.
+> At this point, you can run your app and test the sign-in experience. To run your app, see the [Run the Angular application](#run-the-angular-application) section.
## Check if a user is authenticated
-The `home.component` demonstrates how to check the user is authenticated. In the *src/app/home* folder, update the *home.component.ts* with the following code snippet.
-
+The *home.component* file demonstrates how to check if the user is authenticated. In the *src/app/home* folder, update *home.component.ts* with the following code snippet.
The code:

1. Subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `msalSubject$` and `inProgress$` observable events.
-1. The `msalSubject$` writes the authentication result to the browser console.
-1. The `inProgress$` checks if a user is authenticated. The `getAllAccounts()` returns one, or more objects.
+1. Ensures that the `msalSubject$` event writes the authentication result to the browser console.
+1. Ensures that the `inProgress$` event checks if a user is authenticated. The `getAllAccounts()` method returns one or more objects.
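As a sketch of that pattern, assuming the standard MSAL Angular v2 APIs, the component might look like the following. The `EventType.LOGIN_SUCCESS` filter is one reasonable choice; the sample may log other event types too.

```typescript
import { Component, OnInit } from '@angular/core';
import { MsalBroadcastService, MsalService } from '@azure/msal-angular';
import { EventMessage, EventType, InteractionStatus } from '@azure/msal-browser';
import { filter } from 'rxjs/operators';

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
})
export class HomeComponent implements OnInit {
  loginDisplay = false;

  constructor(
    private authService: MsalService,
    private msalBroadcastService: MsalBroadcastService
  ) {}

  ngOnInit(): void {
    // Write each successful authentication result to the browser console.
    this.msalBroadcastService.msalSubject$
      .pipe(filter((msg: EventMessage) => msg.eventType === EventType.LOGIN_SUCCESS))
      .subscribe((result: EventMessage) => console.log(result));

    // When no interaction is in progress, check whether a user is signed in.
    // getAllAccounts() returns one or more account objects when authenticated.
    this.msalBroadcastService.inProgress$
      .pipe(filter((status: InteractionStatus) => status === InteractionStatus.None))
      .subscribe(() => {
        this.loginDisplay = this.authService.instance.getAllAccounts().length > 0;
      });
  }
}
```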
In the *src/app/home* folder, update *home.component.html* with the following HT
## Read the ID token claims
-The `profile.component` demonstrates how to access the user's ID token claims. In the *src/app/profile* folder, update the *profile.component.ts* with the following code snippet.
+The *profile.component* file demonstrates how to access the user's ID token claims. In the *src/app/profile* folder, update *profile.component.ts* with the following code snippet.
The code:

1. Imports the required components.
-1. Subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `inProgress$` observable event. The event loads the account, and reads the ID token claims.
-1. The `checkAndSetActiveAccount` method checks and sets the active account. This is common when the app interacts with multiple Azure AD B2C user flows or custom policies.
-1. The `getClaims` method gets the ID token claims from the active MSAL account object. Then adds them to the `dataSource` array. The array is rendered to the user with the component's template binding.
+1. Subscribes to the [MSAL MsalBroadcastService](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/events.md) `inProgress$` observable event. The event loads the account and reads the ID token claims.
+1. Ensures that the `checkAndSetActiveAccount` method checks and sets the active account. This action is common when the app interacts with multiple Azure AD B2C user flows or custom policies.
+1. Ensures that the `getClaims` method gets the ID token claims from the active MSAL account object. The method then adds the claims to the `dataSource` array. The array is rendered to the user with the component's template binding.
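Here's a condensed sketch of that logic, assuming the standard MSAL Angular v2 account APIs. The `Claim` shape and the `object` typing of the claims are illustrative assumptions; the sample's exact types may differ.

```typescript
import { Component, OnInit } from '@angular/core';
import { MsalBroadcastService, MsalService } from '@azure/msal-angular';
import { InteractionStatus } from '@azure/msal-browser';
import { filter } from 'rxjs/operators';

export class Claim {
  constructor(public id: number, public claim: string, public value: string) {}
}

@Component({
  selector: 'app-profile',
  templateUrl: './profile.component.html',
})
export class ProfileComponent implements OnInit {
  dataSource: Claim[] = [];

  constructor(
    private authService: MsalService,
    private msalBroadcastService: MsalBroadcastService
  ) {}

  ngOnInit(): void {
    this.msalBroadcastService.inProgress$
      .pipe(filter((status: InteractionStatus) => status === InteractionStatus.None))
      .subscribe(() => {
        this.checkAndSetActiveAccount();
        this.getClaims(this.authService.instance.getActiveAccount()?.idTokenClaims);
      });
  }

  // If no account is active yet but accounts exist, activate the first one.
  // This matters when the app uses multiple user flows or custom policies.
  checkAndSetActiveAccount(): void {
    const activeAccount = this.authService.instance.getActiveAccount();
    const accounts = this.authService.instance.getAllAccounts();
    if (!activeAccount && accounts.length > 0) {
      this.authService.instance.setActiveAccount(accounts[0]);
    }
  }

  // Copy each ID token claim into dataSource for the template binding to render.
  getClaims(claims: object | undefined): void {
    if (!claims) { return; }
    this.dataSource = Object.entries(claims).map(
      ([key, value], index) => new Claim(index, key, String(value))
    );
  }
}
```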
```typescript
import { Component, OnInit } from '@angular/core';
export class Claim {
}
```
-In the *src/app/profile* folder, update the *profile.component.html* with the following HTML snippet.
+In the *src/app/profile* folder, update *profile.component.html* with the following HTML snippet:
```html
<h1>ID token claims:</h1>
```
In the *src/app/profile* folder, update the *profile.component.html* with the fo
## Call a web API
-To call a [token-based authorization web API](enable-authentication-web-api.md), the app needs to have a valid access token. The [MsalInterceptor](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-interceptor.md) provider automatically acquires tokens for outgoing requests that use the Angular [HttpClient](https://angular.io/api/common/http/HttpClient) to known protected resources.
+To call a [token-based authorization web API](enable-authentication-web-api.md), the app needs to have a valid access token. The [MsalInterceptor](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-interceptor.md) provider automatically acquires tokens for outgoing requests that use the Angular [HttpClient](https://angular.io/api/common/http/HttpClient) class to known protected resources.
> [!IMPORTANT]
-> The MSAL initialization method (in the *app.module.ts* class) maps protected resources, such as web APIs with the required app scopes using the `protectedResourceMap` object. If your code needs to call another web API, add the web API URI, the web API HTTP method, with the corresponding scopes to the `protectedResourceMap` object. For more information, see [Protected Resource Map](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/master/lib/msal-angular/docs/v2-docs/msal-interceptor.md#protected-resource-map) article.
+> The MSAL initialization method (in the `app.module.ts` class) maps protected resources, such as web APIs, with the required app scopes by using the `protectedResourceMap` object. If your code needs to call another web API, add the web API URI and the web API HTTP method, with the corresponding scopes, to the `protectedResourceMap` object. For more information, see [Protected Resource Map](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/master/lib/msal-angular/docs/v2-docs/msal-interceptor.md#protected-resource-map).
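As an illustrative sketch of that map (the web API URI and scope here are placeholders, not values from the sample), the interceptor configuration factory in *app.module.ts* might look like this:

```typescript
import { InteractionType } from '@azure/msal-browser';
import { MsalInterceptorConfiguration } from '@azure/msal-angular';

export function MSALInterceptorConfigFactory(): MsalInterceptorConfiguration {
  const protectedResourceMap = new Map<string, Array<string>>();

  // Map each protected web API URI to the scopes its calls require.
  // The MSAL interceptor attaches a token with these scopes to matching requests.
  protectedResourceMap.set('https://localhost:44332/tasks', [
    'https://contoso.onmicrosoft.com/tasks/tasks.read',
  ]);

  return {
    interactionType: InteractionType.Redirect,
    protectedResourceMap,
  };
}
```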
When the [HttpClient](https://angular.io/api/common/http/HttpClient) object calls a web API, the [MsalInterceptor](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/msal-interceptor.md) provider takes the following steps:

1. Acquires an access token with the required permissions (scopes) for the web API endpoint.
-1. Passes the access token as a bearer token in the authorization header of the HTTP request using this format:
+1. Passes the access token as a bearer token in the authorization header of the HTTP request by using this format:
-```http
-Authorization: Bearer <access-token>
-```
+ ```http
+ Authorization: Bearer <access-token>
+ ```
-The `webapi.component` demonstrates how to call a web API. In the *src/app/webapi* folder, update the *webapi.component.ts* with the following code snippet.
+The *webapi.component* file demonstrates how to call a web API. In the *src/app/webapi* folder, update *webapi.component.ts* with the following code snippet.
-The following code:
+The code:
-1. Uses the Angular [HttpClient](https://angular.io/guide/http) to call the web API.
-1. Reads the `auth-config` class's `protectedResources.todoListApi.endpoint`. This element specifies the web API URI. Based on the web API URI, the MSAL interceptor acquires an access token with the corresponding scopes.
-1. Gets the profile from the web API, and sets the `profile` class variable.
+1. Uses the Angular [HttpClient](https://angular.io/guide/http) class to call the web API.
+1. Reads the `auth-config` class's `protectedResources.todoListApi.endpoint` element. This element specifies the web API URI. Based on the web API URI, the MSAL interceptor acquires an access token with the corresponding scopes.
+1. Gets the profile from the web API and sets the `profile` class variable.
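A trimmed sketch of the component follows. The `auth-config` import path and the `Profile` shape are assumptions, and error handling is omitted.

```typescript
import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { protectedResources } from '../auth-config';

type Profile = { name?: string };

@Component({
  selector: 'app-webapi',
  templateUrl: './webapi.component.html',
})
export class WebapiComponent implements OnInit {
  // The web API URI; the MSAL interceptor acquires a token for it automatically.
  url = protectedResources.todoListApi.endpoint;
  profile?: Profile;

  constructor(private http: HttpClient) {}

  ngOnInit(): void {
    this.http.get<Profile>(this.url).subscribe((profile) => (this.profile = profile));
  }
}
```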
```typescript
import { Component, OnInit } from '@angular/core';
export class WebapiComponent implements OnInit {
}
```
-In the *src/app/webapi* folder, update *webapi.component.html* with the following HTML snippet. The component's template renders the `name` that returned by the web API. At the bottom of the page, the template renders the web API address.
+In the *src/app/webapi* folder, update *webapi.component.html* with the following HTML snippet. The component's template renders the name that the web API returns. At the bottom of the page, the template renders the web API address.
```html
<h1>The web API returns:</h1>
In the *src/app/webapi* folder, update *webapi.component.html* with the followin
</div>
```
-Optionally, update the *webapi.component.css* file with the following CSS snippet.
+Optionally, update the *webapi.component.css* file with the following CSS snippet:
```css
.footer-text {
```
Optionally, update the *webapi.component.css* file with the following CSS snippe
## Run the Angular application
-Run the following commands:
+Run the following command:
```console
npm start
```
-The console window displays the port number of where the application is hosted.
+The console window displays the port number where the application is hosted.
```console
Listening on port 4200...
```

> [!TIP]
-> Alternatively to run the `npm start` command, use [VS Code debugger](https://code.visualstudio.com/docs/editor/debugging). VS Code's built-in debugger helps accelerate your edit, compile and debug loop.
+> As an alternative to running the `npm start` command, you can use the [Visual Studio Code debugger](https://code.visualstudio.com/docs/editor/debugging). The debugger helps accelerate your edit, compile, and debug loop.
-Navigate to `http://localhost:4200` in your browser to view the application.
+Go to `http://localhost:4200` in your browser to view the application.
## Next steps
-* Configure [Authentication options in your own Angular application using Azure AD B2C](enable-authentication-angular-spa-app-options.md)
+* [Configure authentication options in your own Angular application by using Azure AD B2C](enable-authentication-angular-spa-app-options.md)
* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Partner Akamai https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-akamai.md
Akamai WAF integration includes the following components:
|:--|:--|
| Origin type | Your origin |
| Origin server hostname | yourafddomain.azurefd.net |
-| Forward host header | Origin hostname |
-| Cache key hostname| Origin hostname |
+| Forward host header | Incoming Host Header |
+| Cache key hostname | Incoming Host Header |
### Configure DNS
active-directory-b2c Saml Service Provider Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-service-provider-options.md
Title: Configure SAML service provider options
title-suffix: Azure Active Directory B2C
-description: How to configure Azure Active Directory B2C SAML service provider options
+description: Learn how to configure Azure Active Directory B2C SAML service provider options.
zone_pivot_groups: b2c-policy-type
# Options for registering a SAML application in Azure AD B2C
-This article describes the configuration options that are available when connecting Azure Active Directory (Azure AD B2C) with your SAML application.
+This article describes the configuration options that are available when you're connecting Azure Active Directory B2C (Azure AD B2C) with your Security Assertion Markup Language (SAML) application.
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
This article describes the configuration options that are available when connect
::: zone pivot="b2c-custom-policy"
-## SAML response signature
+## Specify a SAML response signature
You can specify a certificate to be used to sign the SAML messages. The message is the `<samlp:Response>` element within the SAML response sent to the application.
-If you don't already have a policy key, [create one](saml-service-provider.md#create-a-policy-key). Then configure the `SamlMessageSigning` Metadata item in the SAML Token Issuer technical profile. The `StorageReferenceId` must reference the Policy Key name.
+If you don't already have a policy key, [create one](saml-service-provider.md#create-a-policy-key). Then configure the `SamlMessageSigning` metadata item in the SAML Token Issuer technical profile. `StorageReferenceId` must reference the policy key name.
```xml
<ClaimsProvider>
If you don't already have a policy key, [create one](saml-service-provider.md#cr
</TechnicalProfile>
```
-### SAML response signature algorithm
+### Signature algorithm
-You can configure the signature algorithm used to sign the SAML assertion. Possible values are `Sha256`, `Sha384`, `Sha512`, or `Sha1`. Make sure the technical profile and application use the same signature algorithm. Use only the algorithm that your certificate supports.
+You can configure the signature algorithm that's used to sign the SAML assertion. Possible values are `Sha256`, `Sha384`, `Sha512`, or `Sha1`. Make sure the technical profile and application use the same signature algorithm. Use only the algorithm that your certificate supports.
-Configure the signature algorithm using the `XmlSignatureAlgorithm` metadata key within the relying party Metadata element.
+Configure the signature algorithm by using the `XmlSignatureAlgorithm` metadata key within the relying party `Metadata` element.
```xml
<RelyingParty>
Configure the signature algorithm using the `XmlSignatureAlgorithm` metadata key
</RelyingParty>
```
-## SAML assertions signature
+## Check the SAML assertion signature
-When your application expects SAML assertion section to be signed, make sure the SAML service provider set the `WantAssertionsSigned` to `true`. If set to `false`, or doesn't exist, the assertion section won't be sign. The following example shows a SAML service provider metadata with the `WantAssertionsSigned` set to `true`.
+When your application expects the SAML assertion section to be signed, make sure the SAML service provider sets `WantAssertionsSigned` to `true`. If it's set to `false` or doesn't exist, the assertion section won't be signed.
+
+The following example shows metadata for a SAML service provider, with `WantAssertionsSigned` set to `true`.
```xml
<EntityDescriptor ID="id123456789" entityID="https://samltestapp2.azurewebsites.net" validUntil="2099-12-31T23:59:59Z" xmlns="urn:oasis:names:tc:SAML:2.0:metadata">
When your application expects SAML assertion section to be signed, make sure the
</EntityDescriptor>
```
-### SAML assertions signature certificate
+### Signature certificate
-Your policy must specify a certificate to be used to sign the SAML assertions section of the SAML response. If you don't already have a policy key, [create one](saml-service-provider.md#create-a-policy-key). Then configure the `SamlAssertionSigning` Metadata item in the SAML Token Issuer technical profile. The `StorageReferenceId` must reference the Policy Key name.
+Your policy must specify a certificate to be used to sign the SAML assertions section of the SAML response. If you don't already have a policy key, [create one](saml-service-provider.md#create-a-policy-key). Then configure the `SamlAssertionSigning` metadata item in the SAML Token Issuer technical profile. `StorageReferenceId` must reference the policy key name.
```xml
<ClaimsProvider>
Your policy must specify a certificate to be used to sign the SAML assertions se
</TechnicalProfile>
```
-## SAML assertions encryption
+## Enable encryption in SAML assertions
-When your application expects SAML assertions to be in an encrypted format, you need to make sure that encryption is enabled in the Azure AD B2C policy.
+When your application expects SAML assertions to be in an encrypted format, make sure that encryption is enabled in the Azure AD B2C policy.
-Azure AD B2C uses the service provider's public key certificate to encrypt the SAML assertion. The public key must exist in the SAML application's metadata endpoint with the KeyDescriptor 'use' set to 'Encryption', as shown in the following example:
+Azure AD B2C uses the service provider's public key certificate to encrypt the SAML assertion. The public key must exist in the SAML application's metadata endpoint with the `KeyDescriptor` `use` value set to `Encryption`, as shown in the following example:
```xml
<KeyDescriptor use="encryption">
Azure AD B2C uses the service provider's public key certificate to encrypt the S
</KeyDescriptor>
```
-To enable Azure AD B2C to send encrypted assertions, set the **WantsEncryptedAssertion** metadata item to `true` in the [relying party technical profile](relyingparty.md#technicalprofile). You can also configure the algorithm used to encrypt the SAML assertion.
+To enable Azure AD B2C to send encrypted assertions, set the `WantsEncryptedAssertion` metadata item to `true` in the [relying party technical profile](relyingparty.md#technicalprofile). You can also configure the algorithm that's used to encrypt the SAML assertion.
```xml
<RelyingParty>
```
To enable Azure AD B2C to send encrypted assertions, set the **WantsEncryptedAss
### Encryption method
-To configure the encryption method used to encrypt the SAML assertion data, set the `DataEncryptionMethod` metadata key within the relying party. Possible values are `Aes256` (default), `Aes192`, `Sha512`, or `Aes128`. The metadata controls the value of the `<EncryptedData>` element in the SAML response.
+To configure the encryption method that's used to encrypt the SAML assertion data, set the `DataEncryptionMethod` metadata key within the relying party. Possible values are `Aes256` (default), `Aes192`, `Sha512`, or `Aes128`. The metadata controls the value of the `<EncryptedData>` element in the SAML response.
+
+To configure the encryption method for encrypting the copy of the key that was used to encrypt the SAML assertion data, set the `KeyEncryptionMethod` metadata key within the relying party. Possible values are:
+
+- `Rsa15` (default): RSA Public Key Cryptography Standard (PKCS) Version 1.5 algorithm.
+- `RsaOaep`: RSA Optimal Asymmetric Encryption Padding (OAEP) encryption algorithm.
-To configure the encryption method used to encrypt the copy of the key, that was used to encrypt the SAML assertion data, set the `KeyEncryptionMethod` metadata key within the relying party. Possible values are `Rsa15` (default) - RSA Public Key Cryptography Standard (PKCS) Version 1.5 algorithm, and `RsaOaep` - RSA Optimal Asymmetric Encryption Padding (OAEP) encryption algorithm. The metadata controls the value of the `<EncryptedKey>` element in the SAML response.
+The metadata controls the value of the `<EncryptedKey>` element in the SAML response.
The following example shows the `EncryptedAssertion` section of a SAML assertion. The encrypted data method is `Aes128`, and the encrypted key method is `Rsa15`.
The following example shows the `EncryptedAssertion` section of a SAML assertion
```xml
</saml:EncryptedAssertion>
```
-You can change the format of the encrypted assertions. To configure the encryption format, set the `UseDetachedKeys` metadata key within the relying party. Possible values: `true`, or `false` (default). When the value is set to `true`, the detached keys add the encrypted assertion as a child of the `EncrytedAssertion` as opposed to the `EncryptedData`.
+You can change the format of the encrypted assertions. To configure the encryption format, set the `UseDetachedKeys` metadata key within the relying party. Possible values: `true` or `false` (default). When the value is set to `true`, the detached keys add the encrypted assertion as a child of `EncryptedAssertion` instead of `EncryptedData`.
-Configure the encryption method and format, use the metadata keys within the [relying party technical profile](relyingparty.md#technicalprofile):
+Configure the encryption method and format by using the metadata keys within the [relying party technical profile](relyingparty.md#technicalprofile):
```xml
<RelyingParty>
Configure the encryption method and format, use the metadata keys within the [re
</RelyingParty>
```
-## Identity provider-initiated flow
+## Configure IdP-initiated flow
-When your application expects to receive a SAML assertion without first sending a SAML AuthN request to the identity provider, you must configure Azure AD B2C for identity provider-initiated flow.
+When your application expects to receive a SAML assertion without first sending a SAML AuthN request to the identity provider (IdP), you must configure Azure AD B2C for IdP-initiated flow.
-In identity provider-initiated flow, the sign-in process is initiated by the identity provider (Azure AD B2C), which sends an unsolicited SAML response to the service provider (your relying party application).
+In IdP-initiated flow, the identity provider (Azure AD B2C) starts the sign-in process. The identity provider sends an unsolicited SAML response to the service provider (your relying party application).
-We don't currently support scenarios where the initiating identity provider is an external identity provider federated with Azure AD B2C, for example [AD-FS](identity-provider-adfs.md), or [Salesforce](identity-provider-salesforce-saml.md). It is only supported for Azure AD B2C local account authentication.
+We don't currently support scenarios where the initiating identity provider is an external identity provider federated with Azure AD B2C, such as [Active Directory Federation Services](identity-provider-adfs.md) or [Salesforce](identity-provider-salesforce-saml.md). IdP-initiated flow is supported only for local account authentication in Azure AD B2C.
-To enable identity provider-initiated flow, set the **IdpInitiatedProfileEnabled** metadata item to `true` in the [relying party technical profile](relyingparty.md#technicalprofile).
+To enable IdP-initiated flow, set the `IdpInitiatedProfileEnabled` metadata item to `true` in the [relying party technical profile](relyingparty.md#technicalprofile).
```xml
<RelyingParty>
To enable identity provider-initiated flow, set the **IdpInitiatedProfileEnabled
</RelyingParty>
```
-To sign in or sign up a user through identity provider-initiated flow, use the following URL:
+To sign in or sign up a user through IdP-initiated flow, use the following URL:
```
https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/generic/login?EntityId=app-identifier-uri
```
https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/g
Replace the following values:
-* **tenant-name** with your tenant name
-* **policy-name** with your SAML relying party policy name
-* **app-identifier-uri** with the `identifierUris` in the metadata file, such as `https://contoso.onmicrosoft.com/app-name`
+* Replace `<tenant-name>` with your tenant name.
+* Replace `<policy-name>` with the name of your SAML relying party policy.
+* Replace `app-identifier-uri` with the `identifierUris` value in the metadata file, such as `https://contoso.onmicrosoft.com/app-name`.
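For example, assuming the tenant name *contoso*, the sample policy *B2C_1A_signup_signin_saml*, and the identifier URI `https://contoso.onmicrosoft.com/app-name`, the resulting sign-in URL would be:

`https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_signup_signin_saml/generic/login?EntityId=https://contoso.onmicrosoft.com/app-name`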
### Sample policy
-We provide a complete sample policy that you can use for testing with the SAML test app.
+You can use a complete sample policy for testing with the SAML test app:
1. Download the [SAML-SP-initiated login sample policy](https://github.com/azure-ad-b2c/saml-sp/tree/master/policy/SAML-SP-Initiated).
-1. Update `TenantId` to match your tenant name, for example *contoso.b2clogin.com*.
+1. Update `TenantId` to match your tenant name. This article uses the example *contoso.b2clogin.com*.
1. Keep the policy name *B2C_1A_signup_signin_saml*.
-## SAML response lifetime
+## Configure the SAML response lifetime
-You can configure the length of time the SAML response remains valid. Set the lifetime using the `TokenLifeTimeInSeconds` metadata item within the SAML Token Issuer technical profile. This value is the number of seconds that can elapse from the `NotBefore` timestamp calculated at the token issuance time. The default lifetime is 300 seconds (5 minutes).
+You can configure the length of time that the SAML response remains valid. Set the lifetime by using the `TokenLifeTimeInSeconds` metadata item within the SAML Token Issuer technical profile. This value is the number of seconds that can elapse from the `NotBefore` time stamp, calculated at the token issuance time. The default lifetime is 300 seconds (five minutes).
```xml
<ClaimsProvider>
You can configure the length of time the SAML response remains valid. Set the li
</TechnicalProfile>
```
-## SAML response valid from skew
+## Configure the time skew of a SAML response
-You can configure the time skew applied to the SAML response `NotBefore` timestamp. This configuration ensures that if the times between two platforms aren't in sync, the SAML assertion will still be deemed valid when within this time skew.
+You can configure the time skew applied to the SAML response `NotBefore` time stamp. This configuration ensures that if the times between two platforms aren't in sync, the SAML assertion will still be deemed valid when it's within this time skew.
-Set the time skew using the `TokenNotBeforeSkewInSeconds` metadata item within the SAML Token Issuer technical profile. The skew value is given in seconds, with a default value of 0. The maximum value is 3600 (one hour).
+Set the time skew by using the `TokenNotBeforeSkewInSeconds` metadata item within the SAML Token Issuer technical profile. The skew value is given in seconds, with a default value of 0. The maximum value is 3600 (one hour).
-For example, when the `TokenNotBeforeSkewInSeconds` is set to `120` seconds:
+For example, when `TokenNotBeforeSkewInSeconds` is set to `120` seconds:
-- The token is issued at 13:05:10 UTC-- The token is valid from 13:03:10 UTC
+- The token is issued at 13:05:10 UTC.
+- The token is valid from 13:03:10 UTC.
```xml
<ClaimsProvider>
For example, when the `TokenNotBeforeSkewInSeconds` is set to `120` seconds:
</TechnicalProfile>
```
-## Remove milliseconds from date and time
+## Remove milliseconds from the date and time
-You can specify whether the milliseconds will be removed from datetime values within the SAML response (these include IssueInstant, NotBefore, NotOnOrAfter, and AuthnInstant). To remove the milliseconds, set the `RemoveMillisecondsFromDateTime
-` metadata key within the relying party. Possible values: `false` (default) or `true`.
+You can specify whether milliseconds will be removed from date and time values within the SAML response. (These values include `IssueInstant`, `NotBefore`, `NotOnOrAfter`, and `AuthnInstant`.) To remove the milliseconds, set the `RemoveMillisecondsFromDateTime` metadata key within the relying party. Possible values: `false` (default) or `true`.
```xml
<ClaimsProvider>
You can specify whether the milliseconds will be removed from datetime values wi
</TechnicalProfile>
```
-## Azure AD B2C issuer ID
+## Use an issuer ID to override an issuer URI
-If you have multiple SAML applications that depend on different `entityID` values, you can override the `issueruri` value in your relying party file. To override the issuer URI, copy the technical profile with the "Saml2AssertionIssuer" ID from the base file and override the `issueruri` value.
+If you have multiple SAML applications that depend on different `entityID` values, you can override the `IssuerUri` value in your relying party file. To override the issuer URI, copy the technical profile with the `Saml2AssertionIssuer` ID from the base file and override the `IssuerUri` value.
> [!TIP]
> Copy the `<ClaimsProviders>` section from the base and preserve these elements within the claims provider: `<DisplayName>Token Issuer</DisplayName>`, `<TechnicalProfile Id="Saml2AssertionIssuer">`, and `<DisplayName>Token Issuer</DisplayName>`.
Example:
… ```
-## Session management
+## Manage a session
-You can manage the session between Azure AD B2C and the SAML relying party application using the `UseTechnicalProfileForSessionManagement` element and the [SamlSSOSessionProvider](custom-policy-reference-sso.md#samlssosessionprovider).
+You can manage the session between Azure AD B2C and the SAML relying party application by using the `UseTechnicalProfileForSessionManagement` element and the [SamlSSOSessionProvider](custom-policy-reference-sso.md#samlssosessionprovider).
## Force users to reauthenticate
-To force users to reauthenticate, the application can include the `ForceAuthn` attribute in the SAML authentication request. The `ForceAuthn` attribute is a Boolean value. When set to true, the users' session will be invalidated at Azure AD B2C, and the user is forced to reauthenticate. The following SAML authentication request demonstrates how to set the `ForceAuthn` attribute to true.
+To force users to reauthenticate, the application can include the `ForceAuthn` attribute in the SAML authentication request. The `ForceAuthn` attribute is a Boolean value. When it's set to `true`, the user's session will be invalidated at Azure AD B2C, and the user is forced to reauthenticate.
+The following SAML authentication request demonstrates how to set the `ForceAuthn` attribute to `true`.
```xml
<samlp:AuthnRequest
To force users to reauthenticate, the application can include the `ForceAuthn` a
</samlp:AuthnRequest>
```
-## Sign the Azure AD B2C IdP SAML Metadata
+## Sign the Azure AD B2C IdP SAML metadata
-You can instruct Azure AD B2C to sign its SAML IdP metadata document, if required by the application. If you don't already have a policy key, [create one](saml-service-provider.md#create-a-policy-key). Then configure the `MetadataSigning` metadata item in the SAML token issuer technical profile. The `StorageReferenceId` must reference the policy key name.
+You can instruct Azure AD B2C to sign its metadata document for the SAML identity provider, if the application requires it. If you don't already have a policy key, [create one](saml-service-provider.md#create-a-policy-key). Then configure the `MetadataSigning` metadata item in the SAML Token Issuer technical profile. `StorageReferenceId` must reference the policy key name.
```xml
<ClaimsProvider>
```
You can instruct Azure AD B2C to sign its SAML IdP metadata document, if require
## Debug the SAML protocol
-To help configure and debug the integration with your service provider, you can use a browser extension for the SAML protocol, for example, [SAML DevTools extension](https://chrome.google.com/webstore/detail/saml-devtools-extension/jndllhgbinhiiddokbeoeepbppdnhhio) for Chrome, [SAML-tracer](https://addons.mozilla.org/es/firefox/addon/saml-tracer/) for FireFox, or [Edge or IE Developer tools](https://techcommunity.microsoft.com/t5/microsoft-sharepoint-blog/gathering-a-saml-token-using-edge-or-ie-developer-tools/ba-p/320957).
+To help configure and debug the integration with your service provider, you can use a browser extension for the SAML protocol. Browser extensions include the [SAML DevTools extension for Chrome](https://chrome.google.com/webstore/detail/saml-devtools-extension/jndllhgbinhiiddokbeoeepbppdnhhio), [SAML-tracer for Firefox](https://addons.mozilla.org/es/firefox/addon/saml-tracer/), and [Developer tools for Edge or Internet Explorer](https://techcommunity.microsoft.com/t5/microsoft-sharepoint-blog/gathering-a-saml-token-using-edge-or-ie-developer-tools/ba-p/320957).
-Using these tools, you can check the integration between your application and Azure AD B2C. For example:
+By using these tools, you can check the integration between your application and Azure AD B2C. For example:
* Check whether the SAML request contains a signature and determine what algorithm is used to sign the authorization request.
* Check if Azure AD B2C returns an error message.
-* Check it the assertion section is encrypted.
+* Check if the assertion section is encrypted.
## Next steps

-- Find more information about the [SAML protocol on the OASIS website](https://www.oasis-open.org/).
-- Get the SAML test web app from the [Azure AD B2C GitHub community repo](https://github.com/azure-ad-b2c/saml-sp-tester).
+- Find more information about the SAML protocol on the [OASIS website](https://www.oasis-open.org/).
+- Get the SAML test web app from the [Azure AD B2C GitHub community repository](https://github.com/azure-ad-b2c/saml-sp-tester).
<!-- LINKS - External -->
[samltest]: https://aka.ms/samltestapp
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-service-provider.md
Title: Configure Azure Active Directory B2C as a SAML IdP to your applications
title-suffix: Azure Active Directory B2C
-description: How to configure Azure Active Directory B2C to provide SAML protocol assertions to your applications (service providers). Azure AD B2C will act as the single identity provider (IdP) to your SAML application.
+description: Learn how to configure Azure Active Directory B2C to provide SAML protocol assertions to your applications (service providers).
In this article, learn how to connect your Security Assertion Markup Language (S
## Overview
-Organizations that use Azure AD B2C as their customer identity and access management solution might require integration with applications that authenticate using the SAML protocol. The following diagram shows how Azure AD B2C serves as an *identity provider* (IdP) to achieve single-sign-on (SSO) with SAML-based applications.
+Organizations that use Azure AD B2C as their customer identity and access management solution might require integration with applications that authenticate by using the SAML protocol. The following diagram shows how Azure AD B2C serves as an *identity provider* (IdP) to achieve single-sign-on (SSO) with SAML-based applications.
-![Diagram with B2C as identity provider on left and B2C as service provider on right.](media/saml-service-provider/saml-service-provider-integration.png)
+![Diagram with Azure Active Directory B 2 C as an identity provider on the left and as a service provider on the right.](media/saml-service-provider/saml-service-provider-integration.png)
-1. The application creates a SAML AuthN Request that is sent to Azure AD B2C's SAML login endpoint.
+1. The application creates a SAML AuthN request that's sent to the SAML login endpoint for Azure AD B2C.
2. The user can use an Azure AD B2C local account or any other federated identity provider (if configured) to authenticate.
-3. If the user signs in using a federated identity provider, a token response is sent to Azure AD B2C.
+3. If the user signs in by using a federated identity provider, a token response is sent to Azure AD B2C.
4. Azure AD B2C generates a SAML assertion and sends it to the application.

## Prerequisites
-* Complete the steps in [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy). You need the *SocialAndLocalAccounts* custom policy from the custom policy starter pack discussed in the article.
-* Basic understanding of the SAML protocol and familiarity with the application's SAML implementation.
-* A web application configured as a SAML application. For this tutorial, you can use a [SAML test application][samltest] that we provide.
+For the scenario in this article, you need:
-## Components
+* The *SocialAndLocalAccounts* custom policy from a custom policy starter pack. Complete the steps in [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
+* A basic understanding of the SAML protocol and familiarity with the application's SAML implementation.
+* A web application configured as a SAML application. It must have the ability to send SAML AuthN requests and to receive, decode, and verify SAML responses from Azure AD B2C. The SAML application is also known as the relying party application or service provider.
+* The SAML application's publicly available SAML *metadata endpoint* or XML document.
+* An [Azure AD B2C tenant](tutorial-create-tenant.md).
-There are three main components required for this scenario:
-
-* A SAML **application** with the ability to send SAML AuthN requests and receive, decode, and verify SAML responses from Azure AD B2C. The SAML application is also known as the relying party application or service provider.
-* The SAML application's publicly available SAML **metadata endpoint** or XML document.
-* An [Azure AD B2C tenant](tutorial-create-tenant.md)
-
-If you don't yet have a SAML application and an associated metadata endpoint, you can use this sample SAML application that we've made available for testing:
-
-[SAML Test Application][samltest]
+If you don't yet have a SAML application and an associated metadata endpoint, you can use the [SAML test application][samltest] that we've made available for testing.
[!INCLUDE [active-directory-b2c-https-cipher-tls-requirements](../../includes/active-directory-b2c-https-cipher-tls-requirements.md)]

## Set up certificates
-To build a trust relationship between your application and Azure AD B2C, both services must be able to create and validate each other's signatures. You configure a configure X509 certificates in Azure AD B2C, and your application.
+To build a trust relationship between your application and Azure AD B2C, both services must be able to create and validate each other's signatures. Configure X509 certificates in your application and in Azure AD B2C.
**Application certificates**

| Usage | Required | Description |
| -- | -- | -- |
-| SAML request signing | No | A certificate with a private key stored in your web app, used by your application to sign SAML requests sent to Azure AD B2C. The web app must expose the public key through its SAML metadata endpoint. Azure AD B2C validates the SAML request signature by using the public key from the application metadata.|
-| SAML assertion encryption | No | A certificate with a private key stored in your web app. The web app must expose the public key through its SAML metadata endpoint. Azure AD B2C can encrypt assertions to your application using the public key. The application uses the private key to decrypt the assertion.|
+| SAML request signing | No | A certificate with a private key stored in your web app. Your application uses the certificate to sign SAML requests sent to Azure AD B2C. The web app must expose the public key through its SAML metadata endpoint. Azure AD B2C validates the SAML request signature by using the public key from the application metadata.|
+| SAML assertion encryption | No | A certificate with a private key stored in your web app. The web app must expose the public key through its SAML metadata endpoint. Azure AD B2C can encrypt assertions to your application by using the public key. The application uses the private key to decrypt the assertion.|
**Azure AD B2C certificates**

| Usage | Required | Description |
| -- | -- | -- |
-| SAML response signing | Yes | A certificate with a private key stored in Azure AD B2C. This certificate is used by Azure AD B2C to sign the SAML response sent to your application. Your application reads the Azure AD B2C metadata public key to validate the signature of the SAML response. |
-| SAML assertion signing | Yes | A certificate with a private key stored in Azure AD B2C. This certificate is used by Azure AD B2C to sign the SAML response's assertion. The `<saml:Assertion>` part of the SAML response. |
+| SAML response signing | Yes | A certificate with a private key stored in Azure AD B2C. Azure AD B2C uses this certificate to sign the SAML response sent to your application. Your application reads the metadata public key in Azure AD B2C to validate the signature of the SAML response. |
+| SAML assertion signing | Yes | A certificate with a private key stored in Azure AD B2C. Azure AD B2C uses this certificate to sign the `<saml:Assertion>` part of the SAML response. |
-In a production environment, we recommend using certificates issued by a public certificate authority. However, you can also complete this procedure with self-signed certificates.
+In a production environment, we recommend using certificates that a public certificate authority has issued. But you can also complete this procedure with self-signed certificates.
### Create a policy key
-To have a trust relationship between your application and Azure AD B2C, create a SAML response signing certificate. Azure AD B2C uses this certificate to sign the SAML response sent to your application. Your application reads the Azure AD B2C metadata public key to validate the signature of the SAML response.
+To have a trust relationship between your application and Azure AD B2C, create a signing certificate for the SAML response. Azure AD B2C uses this certificate to sign the SAML response sent to your application. Your application reads the metadata public key for Azure AD B2C to validate the signature of the SAML response.
> [!TIP]
-> You can use the policy key that you create in this section, for other purposes, such as sign-in the [SAML assertion](saml-service-provider-options.md#saml-assertions-signature).
+> You can use this policy key for other purposes, such as signing the [SAML assertion](saml-service-provider-options.md#check-the-saml-assertion-signature).
### Obtain a certificate
To have a trust relationship between your application and Azure AD B2C, create a
You need to store your certificate in your Azure AD B2C tenant.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
-1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
-1. On the Overview page, select **Identity Experience Framework**.
-1. Select **Policy Keys** and then select **Add**.
-1. For **Options**, choose `Upload`.
-1. Enter a **Name** for the policy key. For example, `SamlIdpCert`. The prefix `B2C_1A_` is added automatically to the name of your key.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter on the top menu and choose the directory that contains your tenant.
+1. Select **All services** in the upper-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. On the **Overview** page, select **Identity Experience Framework**.
+1. Select **Policy Keys**, and then select **Add**.
+1. For **Options**, select **Upload**.
+1. For **Name**, enter a name for the policy key. For example, enter **SamlIdpCert**. The prefix **B2C_1A_** is added automatically to the name of your key.
1. Browse to and select your certificate .pfx file with the private key.
-1. Click **Create**.
+1. Select **Create**.
## Enable your policy to connect with a SAML application

To connect to your SAML application, Azure AD B2C must be able to create SAML responses.
-Open `SocialAndLocalAccounts\`**`TrustFrameworkExtensions.xml`** in the custom policy starter pack.
+Open *SocialAndLocalAccounts\TrustFrameworkExtensions.xml* in the custom policy starter pack.
-Locate the `<ClaimsProviders>` section and add the following XML snippet to implement your SAML response generator.
+Find the `<ClaimsProviders>` section and add the following XML snippet to implement your SAML response generator:
```xml
<ClaimsProvider>
Locate the `<ClaimsProviders>` section and add the following XML snippet to impl
      <UseTechnicalProfileForSessionManagement ReferenceId="SM-Saml-issuer"/>
    </TechnicalProfile>
- <!-- Session management technical profile for SAML based tokens -->
+ <!-- Session management technical profile for SAML-based tokens -->
<TechnicalProfile Id="SM-Saml-issuer"> <DisplayName>Session Management Provider</DisplayName> <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.SamlSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/>
Locate the `<ClaimsProviders>` section and add the following XML snippet to impl
</ClaimsProvider>
```
-#### Configure the IssuerUri of the SAML response
+#### Configure the issuer URI of the SAML response
-You can change the value of the `IssuerUri` metadata item in the SAML token issuer technical profile. This change will be reflected in the `issuerUri` attribute returned in the SAML response from Azure AD B2C. Your application should be configured to accept the same `issuerUri` during SAML response validation.
+You can change the value of the `IssuerUri` metadata item in the SAML Token Issuer technical profile. This change will be reflected in the `issuerUri` attribute returned in the SAML response from Azure AD B2C. Configure your application to accept the same `IssuerUri` value during SAML response validation.
```xml
<ClaimsProvider>
You can change the value of the `IssuerUri` metadata item in the SAML token issu
</TechnicalProfile>
```
-## Configure your policy to issue a SAML Response
+## Configure your policy to issue a SAML response
Now that your policy can create SAML responses, you must configure the policy to issue a SAML response instead of the default JWT response to your application.

### Create a sign-up or sign-in policy configured for SAML
-1. Create a copy of the *SignUpOrSignin.xml* file in your starter pack working directory and save it with a new name. For example, *SignUpOrSigninSAML.xml*. This file is your relying party policy file, and it is configured to issue a JWT response by default.
+1. Create a copy of the *SignUpOrSignin.xml* file in your starter pack's working directory and save it with a new name. This article uses *SignUpOrSigninSAML.xml* as an example. This file is your policy file for the relying party. It's configured to issue a JWT response by default.
1. Open the *SignUpOrSigninSAML.xml* file in your preferred editor.
-1. Change the `PolicyId` and `PublicPolicyUri` of the policy to _B2C_1A_signup_signin_saml_ and `http://<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin_saml` as seen below.
+1. Change the `PolicyId` and `PublicPolicyUri` values of the policy to `B2C_1A_signup_signin_saml` and `http://<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin_saml`.
```xml
<TrustFrameworkPolicy
Now that your policy can create SAML responses, you must configure the policy to
  PublicPolicyUri="http://<tenant-name>.onmicrosoft.com/B2C_1A_signup_signin_saml">
```
-1. At the end of the User Journey, Azure AD B2C contains a `SendClaims` step. This step references the Token Issuer Technical Profile. To issue a SAML response rather than the default JWT response, modify the `SendClaims` step to reference the new SAML Token issuer technical profile, `Saml2AssertionIssuer`.
+1. At the end of the user journey, Azure AD B2C contains a `SendClaims` step. This step references the Token Issuer technical profile. To issue a SAML response rather than the default JWT response, modify the `SendClaims` step to reference the new SAML Token Issuer technical profile, `Saml2AssertionIssuer`.
+
+Add the following XML snippet just before the `<RelyingParty>` element. This XML overwrites orchestration step 7 in the _SignUpOrSignIn_ user journey.
-Add the following XML snippet just before the `<RelyingParty>` element. This XML overwrites orchestration step number 7 in the _SignUpOrSignIn_ user journey. If you started from a different folder in the starter pack or you customized the user journey by adding or removing orchestration steps, make sure the number in the `order` element corresponds to the number specified in the user journey for the token issuer step. For example, in the other starter pack folders, the corresponding step number is 4 for `LocalAccounts`, 6 for `SocialAccounts` and 9 for `SocialAndLocalAccountsWithMfa`).
+If you started from a different folder in the starter pack or you customized the user journey by adding or removing orchestration steps, make sure the number in the `order` element corresponds to the number specified in the user journey for the token issuer step. For example, in the other starter pack folders, the corresponding step number is 4 for `LocalAccounts`, 6 for `SocialAccounts`, and 9 for `SocialAndLocalAccountsWithMfa`.
```xml
<UserJourneys>
Add the following XML snippet just before the `<RelyingParty>` element. This XML
</UserJourneys>
```
-The relying party element determines which protocol your application uses. The default is `OpenId`. The `Protocol` element must be changed to `SAML`. The Output Claims will create the claims mapping to the SAML assertion.
+The relying party element determines which protocol your application uses. The default is `OpenId`. The `Protocol` element must be changed to `SAML`. The output claims will create the claims mapping to the SAML assertion.
Replace the entire `<TechnicalProfile>` element in the `<RelyingParty>` element with the following technical profile XML. Update `tenant-name` with the name of your Azure AD B2C tenant.
Replace the entire `<TechnicalProfile>` element in the `<RelyingParty>` element
```xml
</TechnicalProfile>
```
-Your final relying party policy file should look like the following XML code:
+Your final policy file for the relying party should look like the following XML code:
```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
Your final relying party policy file should look like the following XML code:
```

> [!NOTE]
-> You can follow this same process to implement other types of user flows (for example sign-in, password reset, or profile editing flows).
+> You can follow this same process to implement other types of user flows (for example: sign-in, password reset, or profile editing flows).
### Upload your policy
-Save your changes and upload the new **TrustFrameworkExtensions.xml** and **SignUpOrSigninSAML.xml** policy files to the Azure portal.
+Save your changes and upload the new *TrustFrameworkExtensions.xml* and *SignUpOrSigninSAML.xml* policy files to the Azure portal.
-### Test the Azure AD B2C IdP SAML Metadata
+### Test the Azure AD B2C IdP SAML metadata
-After the policy files are uploaded, Azure AD B2C uses the configuration information to generate the identity provider's SAML metadata document to be used by the application. The SAML metadata document contains the locations of services, such as sign-in and logout methods, certificates, and so on.
+After the policy files are uploaded, Azure AD B2C uses the configuration information to generate the identity provider's SAML metadata document that the application will use. The SAML metadata document contains the locations of services, such as sign-in methods, logout methods, and certificates.
The Azure AD B2C policy metadata is available at the following URL: `https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/samlp/metadata`
-Replace `<tenant-name>` with the name of your Azure AD B2C tenant and `<policy-name>` with the name (ID) of the policy, for example:
+Replace `<tenant-name>` with the name of your Azure AD B2C tenant. Replace `<policy-name>` with the name (ID) of the policy. Here's an example:
`https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_signup_signin_saml/samlp/metadata`

## Register your SAML application in Azure AD B2C
-For Azure AD B2C to trust your application, you create an Azure AD B2C application registration, which contains configuration information such as the application's metadata endpoint.
+For Azure AD B2C to trust your application, you create an Azure AD B2C application registration. The registration contains configuration information, such as the application's metadata endpoint.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
-1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
+1. Select the **Directory + subscription** filter on the top menu, and then select the directory that contains your Azure AD B2C tenant.
+1. On the left menu, select **Azure AD B2C**. Or, select **All services** and then search for and select **Azure AD B2C**.
1. Select **App registrations**, and then select **New registration**.
-1. Enter a **Name** for the application. For example, *SAMLApp1*.
+1. Enter a **Name** for the application. For example, enter **SAMLApp1**.
1. Under **Supported account types**, select **Accounts in this organizational directory only**.
1. Under **Redirect URI**, select **Web**, and then enter `https://localhost`. You'll modify this value later in the application registration's manifest.
1. Select **Register**.

### Configure your application in Azure AD B2C
-For SAML apps, you'll need to configure several properties in the application registration's manifest.
+For SAML apps, you need to configure several properties in the application registration's manifest.
-1. In the [Azure portal](https://portal.azure.com), navigate to the application registration that you created in the previous section.
-1. Under **Manage**, select **Manifest** to open the manifest editor, and then modify the properties described in the following sections.
+1. In the [Azure portal](https://portal.azure.com), go to the application registration that you created in the previous section.
+1. Under **Manage**, select **Manifest** to open the manifest editor. Then modify the properties described in the following sections.
#### Add the identifier
-When your SAML application makes a request to Azure AD B2C, the SAML AuthN request includes an `Issuer` attribute, which is typically the same value as the application's metadata `entityID`. Azure AD B2C uses this value to look up the application registration in the directory and read the configuration. For this lookup to succeed, the `identifierUri` in the application registration must be populated with a value that matches the `Issuer` attribute.
+When your SAML application makes a request to Azure AD B2C, the SAML AuthN request includes an `Issuer` attribute. The value of this attribute is typically the same as the application's metadata `entityID` value. Azure AD B2C uses this value to look up the application registration in the directory and read the configuration. For this lookup to succeed, `identifierUri` in the application registration must be populated with a value that matches the `Issuer` attribute.
-In the registration manifest, locate the `identifierURIs` parameter and add the appropriate value. This value will be same value that is configured in the SAML AuthN requests for EntityId at the application, and the `entityID` value in the application's metadata.
+In the registration manifest, find the `identifierURIs` parameter and add the appropriate value. This value will be the same value that's configured in the SAML AuthN requests for `EntityId` at the application, and the `entityID` value in the application's metadata.
-The following example shows the `entityID` in the SAML metadata:
+The following example shows the `entityID` value in the SAML metadata:
```xml
<EntityDescriptor ID="id123456789" entityID="https://samltestapp2.azurewebsites.net" validUntil="2099-12-31T23:59:59Z" xmlns="urn:oasis:names:tc:SAML:2.0:metadata">
```
-The `identifierUris` property will only accept URLs on the domain `tenant-name.onmicrosoft.com`.
+The `identifierUris` property will accept URLs only on the domain `tenant-name.onmicrosoft.com`.
```json "identifierUris":"https://samltestapp2.azurewebsites.net",
#### Share the application's metadata with Azure AD B2C
-After the application registration has been loaded by its `identifierUri`, Azure AD B2C uses the application's metadata to validate the SAML AuthN request and determine how to respond.
+After the application registration has been loaded by its `identifierUri` value, Azure AD B2C uses the application's metadata to validate the SAML AuthN request and determine how to respond.
-It's recommended that your application exposes a publicly accessible metadata endpoint.
+We recommend that your application exposes a publicly accessible metadata endpoint.
-If there are properties specified in *both* the SAML metadata URL and the application registration's manifest, they are *merged*. The properties specified in the metadata URL are processed first and take precedence.
+If there are properties specified in *both* the SAML metadata URL and the application registration's manifest, they're *merged*. The properties specified in the metadata URL are processed first and take precedence.
Using the SAML test application as an example, you'd use the following value for `samlMetadataUrl` in the application manifest:
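The following sketch assumes the SAML test application publishes its metadata at a root `/Metadata` path; substitute your application's actual metadata URL:

```json
"samlMetadataUrl": "https://samltestapp2.azurewebsites.net/Metadata",
```
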
#### Override or set the assertion consumer URL (optional)
-You can configure the reply URL to which Azure AD B2C sends SAML responses. Reply URLs can be configured within the application manifest. This configuration is useful when your application doesn't expose a publicly accessible metadata endpoint.
+You can configure the reply URL to which Azure AD B2C sends SAML responses. Reply URLs can be configured in the application manifest. This configuration is useful when your application doesn't expose a publicly accessible metadata endpoint.
-The reply URL for a SAML application is the endpoint at which the application expects to receive SAML responses. The application usually provides this URL in the metadata document under the `AssertionConsumerServiceUrl` attribute, as shown below:
+The reply URL for a SAML application is the endpoint at which the application expects to receive SAML responses. The application usually provides this URL in the metadata document under the `AssertionConsumerServiceUrl` attribute, as shown in this example:
```xml
<SPSSODescriptor AuthnRequestsSigned="false" WantAssertionsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://samltestapp2.azurewebsites.net/SP/AssertionConsumer" index="0" isDefault="true"/>
</SPSSODescriptor>
```
-If you want to override the metadata provided in the `AssertionConsumerServiceUrl` attribute or the URL isn't present in the metadata document, you can configure the URL in the manifest under the `replyUrlsWithType` property. The `BindingType` will be set to `HTTP POST`.
+If you want to override the metadata provided in the `AssertionConsumerServiceUrl` attribute or the URL isn't present in the metadata document, you can configure the URL in the manifest under the `replyUrlsWithType` property. The `BindingType` value will be set to `HTTP POST`.
-Using the SAML test application as an example, you'd set the `url` property of `replyUrlsWithType` to the value shown in the following JSON snippet.
+Using the SAML test application as an example, you'd set the `url` property of `replyUrlsWithType` to the value shown in the following JSON snippet:
```json "replyUrlsWithType":[
Using the SAML test application as an example, you'd set the `url` property of `
#### Override or set the logout URL (optional)
-You can configure the logout URL to which Azure AD B2C will send the user after a logout request. Reply URLs can be configured within the Application Manifest.
+You can configure the logout URL to which Azure AD B2C will send the user after a logout request. Reply URLs can be configured in the application manifest.
-If you want to override the metadata provided in the `SingleLogoutService` attribute or the URL isn't present in the metadata document, you can configure it in the manifest under the `Logout` property. The `BindingType` will be set to `Http-Redirect`.
+If you want to override the metadata provided in the `SingleLogoutService` attribute or the URL isn't present in the metadata document, you can configure it in the manifest under the `Logout` property. The `BindingType` value will be set to `Http-Redirect`.
-The application usually provides this URL in the metadata document under the `AssertionConsumerServiceUrl` attribute, as shown below:
+The application usually provides this URL in the metadata document under the `SingleLogoutService` attribute, as shown in the following example:
```xml
<IDPSSODescriptor WantAuthnRequestsSigned="false" WantAssertionsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://samltestapp2.azurewebsites.net/logout" />
</IDPSSODescriptor>
```
-Using the SAML test application as an example, you'd, leave `logoutUrl` set to `https://samltestapp2.azurewebsites.net/logout`:
+Using the SAML test application as an example, you'd leave `logoutUrl` set to `https://samltestapp2.azurewebsites.net/logout`:
```json "logoutUrl": "https://samltestapp2.azurewebsites.net/logout", ``` > [!NOTE]
-> If you choose to configure the reply URL and logout URL in the application manifest without populating the application's metadata endpoint via the `samlMetadataUrl` property, Azure AD B2C will not validate the SAML request signature, nor will it encrypt the SAML response.
+> If you choose to configure the reply URL and logout URL in the application manifest without populating the application's metadata endpoint via the `samlMetadataUrl` property, Azure AD B2C won't validate the SAML request signature. It won't encrypt the SAML response either.
## Configure Azure AD B2C as a SAML IdP in your SAML application

The last step is to enable Azure AD B2C as a SAML IdP in your SAML application. Each application is different and the steps vary. Consult your app's documentation for details.
-The metadata can be configured in your application as *static metadata* or *dynamic metadata*. In static mode, copy all or part of the metadata from the Azure AD B2C policy metadata. In dynamic mode, provide the URL to the metadata and to allow your application to read the metadata dynamically.
+The metadata can be configured in your application as *static metadata* or *dynamic metadata*. In static mode, copy all or part of the metadata from the Azure AD B2C policy metadata. In dynamic mode, provide the URL to the metadata and allow your application to read the metadata dynamically.
-Some or all the following are typically required:
+Some or all of the following are typically required:
* **Metadata**: Use the format `https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/Samlp/metadata`.
-* **Issuer**: The SAML request `issuer` value must match one of the URIs configured in the `identifierUris` element of the application registration manifest. If the SAML request `issuer` name doesn't exist in the `identifierUris` element, [add it to the application registration manifest](#add-the-identifier). For example, `https://contoso.onmicrosoft.com/app-name`.
-* **Login Url/SAML endpoint/SAML Url**: Check the value in the Azure AD B2C SAML policy metadata file for the `<SingleSignOnService>` XML element.
+* **Issuer**: The SAML request's `issuer` value must match one of the URIs configured in the `identifierUris` element of the application registration manifest. If the SAML request's `issuer` name doesn't exist in the `identifierUris` element, [add it to the application registration manifest](#add-the-identifier). For example: `https://contoso.onmicrosoft.com/app-name`.
+* **Login URL, SAML endpoint, SAML URL**: Check the value in the Azure AD B2C SAML policy metadata file for the `<SingleSignOnService>` XML element.
* **Certificate**: This certificate is *B2C_1A_SamlIdpCert*, but without the private key. To get the public key of the certificate:
- 1. Go to the metadata URL specified above.
+ 1. Go to the metadata URL specified earlier.
1. Copy the value in the `<X509Certificate>` element.
1. Paste it into a text file.
1. Save the text file as a *.cer* file.

### Test with the SAML test app
-You can use our [SAML Test Application][samltest] to test your configuration:
+You can use our [SAML test application][samltest] to test your configuration:
* Update the tenant name.
-* Update the policy name, for example *B2C_1A_signup_signin_saml*.
-* Specify this issuer URI. Use one of the URIs found in the `identifierUris` element in the application registration manifest, for example `https://contoso.onmicrosoft.com/app-name`.
+* Update the policy name. For example, use *B2C_1A_signup_signin_saml*.
+* Specify the issuer URI. Use one of the URIs found in the `identifierUris` element in the application registration manifest. For example, use `https://contoso.onmicrosoft.com/app-name`.
-Select **Login** and you should be presented with a user sign-in screen. Upon sign-in, a SAML response is issued back to the sample application.
+Select **Login**, and a user sign-in screen should appear. After you sign in, a SAML response will be issued back to the sample application.
## Supported and unsupported SAML modalities

The following SAML application scenarios are supported via your own metadata endpoint:
-* Multiple logout URLs or POST binding for logout URL in the application/service principal object.
-* Specify a signing key to verify relying party (RP) requests in the application/service principal object.
-* Specify a token encryption key in the application/service principal object.
-* Identity provider-initiated sign-on, where the identity provider is Azure AD B2C.
+* Specify multiple logout URLs or POST binding for the logout URL in the application or service principal object.
+* Specify a signing key to verify relying party requests in the application or service principal object.
+* Specify a token encryption key in the application or service principal object.
+* Specify IdP-initiated sign-on, where the identity provider is Azure AD B2C.
## Next steps

-- Get the SAML test web app from [Azure AD B2C GitHub community repo](https://github.com/azure-ad-b2c/saml-sp-tester).
-- See the [options for registering a SAML application in Azure AD B2C](saml-service-provider-options.md)
+- Get the SAML test web app from the [Azure AD B2C GitHub community repo](https://github.com/azure-ad-b2c/saml-sp-tester).
+- See the [options for registering a SAML application in Azure AD B2C](saml-service-provider-options.md).
<!-- LINKS - External -->
[samltest]: https://aka.ms/samltestapp
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Previously updated : 08/25/2021 Last updated : 08/30/2021
Requires one string argument. Returns the string, but with any diacritical characters replaced with equivalent non-diacritical characters.
| **source** |Required |String | Usually a first name or last name attribute. |
+| Character with Diacritic | Normalized character | Character with Diacritic | Normalized character |
+| --- | --- | --- | --- |
+| ä, à, â, ã, å, á, ą, ă | a | Ä, À, Â, Ã, Å, Á, Ą, Ă | A |
+| æ | ae | Æ | AE |
+| ç, č, ć | c | Ç, Č, Ć | C |
+| ď | d | Ď | D |
+| ë, è, é, ê, ę, ě, ė | e | Ë, È, É, Ê, Ę, Ě, Ė | E |
+| ğ | g | Ğ | G |
+| ï, î, ì, í, ı | i | Ï, Î, Ì, Í, İ | I |
+| ľ, ł | l | Ł, Ľ | L |
+| ñ, ń, ň | n | Ñ, Ń, Ň | N |
+| ö, ò, ő, õ, ô, ó | o | Ö, Ò, Ő, Õ, Ô, Ó | O |
+| ø | oe | Ø | OE |
+| ř | r | Ř | R |
+| ß | ss | | |
+| š, ś, ș, ş | s | Š, Ś, Ș, Ş | S |
+| ť, ț | t | Ť, Ț | T |
+| ü, ù, û, ú, ů, ű | u | Ü, Ù, Û, Ú, Ů, Ű | U |
+| ÿ, ý | y | Ÿ, Ý | Y |
+| ź, ž, ż | z | Ź, Ž, Ż | Z |
++

#### Remove diacritics from a string

Example: You need to replace characters containing accent marks with equivalent characters that don't contain accent marks.
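A minimal sketch of the expression for this scenario (the `givenName` attribute is illustrative):

```
NormalizeDiacritics([givenName])
```

With this mapping, a source value of "Zoë" flows to the target as "Zoe".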
active-directory Howto Mfaserver Adfs 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfaserver-adfs-2.md
Previously updated : 07/11/2018 Last updated : 08/27/2021
This article is for organizations that are federated with Azure Active Directory, and want to secure resources that are on-premises or in the cloud. Protect your resources by using the Azure Multi-Factor Authentication Server and configuring it to work with AD FS so that two-step verification is triggered for high-value end points.
-This documentation covers using the Azure Multi-Factor Authentication Server with AD FS 2.0. For information about AD FS, see [Securing cloud and on-premises resources using Azure Multi-Factor Authentication Server with Windows Server 2012 R2 AD FS](howto-mfaserver-adfs-2012.md).
+This documentation covers using the Azure Multi-Factor Authentication Server with AD FS 2.0. For information about AD FS, see [Securing cloud and on-premises resources using Azure Multi-Factor Authentication Server with Windows Server](howto-mfaserver-adfs-windows-server.md).
> [!IMPORTANT]
> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
active-directory Howto Mfaserver Adfs Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfaserver-adfs-windows-server.md
+
+ Title: Azure MFA Server with AD FS in Windows Server - Azure Active Directory
+description: This article describes how to get started with Azure Multi-Factor Authentication and AD FS in Windows Server 2016.
+++++ Last updated : 08/25/2021++++++++
+# Configure Azure Multi-Factor Authentication Server to work with AD FS in Windows Server
+
+If you use Active Directory Federation Services (AD FS) and want to secure cloud or on-premises resources, you can configure Azure Multi-Factor Authentication Server to work with AD FS. This configuration triggers two-step verification for high-value endpoints.
+
+In this article, we discuss using Azure Multi-Factor Authentication Server with AD FS beginning with Windows Server 2016. For more information, read about how to [secure cloud and on-premises resources by using Azure Multi-Factor Authentication Server with AD FS 2.0](howto-mfaserver-adfs-2.md).
+
+> [!IMPORTANT]
+> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
+>
+> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
+>
+> If you use cloud-based MFA, see [Securing cloud resources with Azure AD Multi-Factor Authentication and AD FS](howto-mfa-adfs.md).
+>
+> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual.
+
+## Secure Windows Server AD FS with Azure Multi-Factor Authentication Server
+
+When you install Azure Multi-Factor Authentication Server, you have the following options:
+
+* Install Azure Multi-Factor Authentication Server locally on the same server as AD FS
+* Install the Azure Multi-Factor Authentication adapter locally on the AD FS server, and then install Multi-Factor Authentication Server on a different computer
+
+Before you begin, be aware of the following information:
+
+* You don't have to install Azure Multi-Factor Authentication Server on your AD FS server. However, you must install the Multi-Factor Authentication adapter for AD FS on a Windows Server 2012 R2 or Windows Server 2016 computer that's running AD FS. You can install the server on a different computer if you install the AD FS adapter separately on your AD FS federation server. See the following procedures to learn how to install the adapter separately.
+* If your organization is using text message or mobile app verification methods, the strings defined in Company Settings contain a placeholder, <$*application_name*$>. In MFA Server v7.1, you can provide an application name that replaces this placeholder. In v7.0 or older, this placeholder is not automatically replaced when you use the AD FS adapter. For those older versions, remove the placeholder from the appropriate strings when you secure AD FS.
+* The account that you use to sign in must have user rights to create security groups in your Active Directory service.
+* The Multi-Factor Authentication AD FS adapter installation wizard creates a security group called PhoneFactor Admins in your instance of Active Directory. It then adds the AD FS service account of your federation service to this group. Verify that the PhoneFactor Admins group was created on your domain controller, and that the AD FS service account is a member of this group. If necessary, manually add the AD FS service account to the PhoneFactor Admins group on your domain controller.
+* For information about installing the Web Service SDK with the user portal, see [Deploying the user portal for Azure Multi-Factor Authentication Server](howto-mfaserver-deploy-userportal.md).
+
+### Install Azure Multi-Factor Authentication Server locally on the AD FS server
+
+1. Download and install Azure Multi-Factor Authentication Server on your AD FS server. For installation information, read about [getting started with Azure Multi-Factor Authentication Server](howto-mfaserver-deploy.md).
+2. In the Azure Multi-Factor Authentication Server management console, click the **AD FS** icon. Select the options **Allow user enrollment** and **Allow users to select method**.
+3. Select any additional options you'd like to specify for your organization.
+4. Click **Install AD FS Adapter**.
+
+ ![Install the ADFS Adapter from the MFA Server console](./media/howto-mfaserver-adfs-2012/server.png)
+
+5. If the Active Directory window appears, it means two things: your computer is joined to a domain, and the Active Directory configuration for securing communication between the AD FS adapter and the Multi-Factor Authentication service is incomplete. Click **Next** to automatically complete this configuration, or select the **Skip automatic Active Directory configuration and configure settings manually** check box. Click **Next**.
+6. If the Local Group window appears, it means two things: your computer is not joined to a domain, and the local group configuration for securing communication between the AD FS adapter and the Multi-Factor Authentication service is incomplete. Click **Next** to automatically complete this configuration, or select the **Skip automatic Local Group configuration and configure settings manually** check box. Click **Next**.
+7. In the installation wizard, click **Next**. Azure Multi-Factor Authentication Server creates the PhoneFactor Admins group and adds the AD FS service account to the PhoneFactor Admins group.
+8. On the **Launch Installer** page, click **Next**.
+9. In the Multi-Factor Authentication AD FS adapter installer, click **Next**.
+10. Click **Close** when the installation is finished.
+11. When the adapter has been installed, you must register it with AD FS. Open Windows PowerShell and run the following command:
+
+ `C:\Program Files\Multi-Factor Authentication Server\Register-MultiFactorAuthenticationAdfsAdapter.ps1`
+
+12. To use your newly registered adapter, edit the global authentication policy in AD FS. In the AD FS management console, go to the **Authentication Policies** node. In the **Multi-factor Authentication** section, click the **Edit** link next to the **Global Settings** section. In the **Edit Global Authentication Policy** window, select **Multi-Factor Authentication** as an additional authentication method, and then click **OK**. The adapter is registered as WindowsAzureMultiFactorAuthentication. Restart the AD FS service for the registration to take effect.
+
+![Edit global authentication policy](./media/howto-mfaserver-adfs-2012/global.png)
+
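+For reference, here's a minimal PowerShell sketch of steps 11 and 12. It assumes the default installation path; `adfssrv` is the name of the AD FS Windows service:
+
+```powershell
+# Register the MFA adapter with AD FS (step 11).
+& "C:\Program Files\Multi-Factor Authentication Server\Register-MultiFactorAuthenticationAdfsAdapter.ps1"
+
+# Restart AD FS so the registration takes effect (step 12).
+Restart-Service adfssrv
+```
+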
+At this point, Multi-Factor Authentication Server is set up to be an additional authentication provider to use with AD FS.
+
+## Install a standalone instance of the AD FS adapter by using the Web Service SDK
+
+1. Install the Web Service SDK on the server that is running Multi-Factor Authentication Server.
+2. Copy the following files from the \Program Files\Multi-Factor Authentication Server directory to the server on which you plan to install the AD FS adapter:
+ * MultiFactorAuthenticationAdfsAdapterSetup64.msi
+ * Register-MultiFactorAuthenticationAdfsAdapter.ps1
+ * Unregister-MultiFactorAuthenticationAdfsAdapter.ps1
+ * MultiFactorAuthenticationAdfsAdapter.config
+3. Run the MultiFactorAuthenticationAdfsAdapterSetup64.msi installation file.
+4. In the Multi-Factor Authentication AD FS adapter installer, click **Next** to start the installation.
+5. Click **Close** when the installation is finished.
+
+## Edit the MultiFactorAuthenticationAdfsAdapter.config file
+
+Follow these steps to edit the MultiFactorAuthenticationAdfsAdapter.config file:
+
+1. Set the **UseWebServiceSdk** node to **true**.
+2. Set the value for **WebServiceSdkUrl** to the URL of the Multi-Factor Authentication Web Service SDK. For example: *https:\/\/contoso.com/\<certificatename>/MultiFactorAuthWebServiceSdk/PfWsSdk.asmx*, where *\<certificatename>* is the name of your certificate.
+3. Edit the Register-MultiFactorAuthenticationAdfsAdapter.ps1 script by adding `-ConfigurationFilePath &lt;path&gt;` to the end of the `Register-AdfsAuthenticationProvider` command, where *&lt;path&gt;* is the full path to the MultiFactorAuthenticationAdfsAdapter.config file, as shown in the sketch after these steps.
+
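+The following sketch shows what the edited command might look like. The `$typeName` variable and the configuration file path are illustrative; use the values from your copy of the script and your environment:
+
+```powershell
+# Inside Register-MultiFactorAuthenticationAdfsAdapter.ps1: append -ConfigurationFilePath
+# to the existing Register-AdfsAuthenticationProvider call.
+Register-AdfsAuthenticationProvider -TypeName $typeName -Name "MultiFactorAuthenticationAdfsAdapter" `
+    -ConfigurationFilePath "C:\Program Files\Multi-Factor Authentication Server\MultiFactorAuthenticationAdfsAdapter.config"
+```
+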
+### Configure the Web Service SDK with a username and password
+
+There are two options for configuring the Web Service SDK. The first is with a username and password; the second is with a client certificate. Follow these steps for the first option, or skip ahead to the next section for the second.
+
+1. Set the value for **WebServiceSdkUsername** to an account that is a member of the PhoneFactor Admins security group. Use the &lt;domain&gt;&#92;&lt;user name&gt; format.
+2. Set the value for **WebServiceSdkPassword** to the appropriate account password. The special character "&" cannot be used in the **WebServiceSdkPassword**.
+
+### Configure the Web Service SDK with a client certificate
+
+If you don't want to use a username and password, follow these steps to configure the Web Service SDK with a client certificate.
+
+1. Obtain a client certificate from a certificate authority for the server that is running the Web Service SDK. Learn how to [obtain client certificates](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc770328(v=ws.10)).
+2. Import the client certificate to the local computer personal certificate store on the server that is running the Web Service SDK. Make sure that the certificate authority's public certificate is in the Trusted Root Certificates store.
+3. Export the public and private keys of the client certificate to a .pfx file.
+4. Export the public key in Base64 format to a .cer file.
+5. In Server Manager, verify that the Web Server (IIS)\Web Server\Security\IIS Client Certificate Mapping Authentication feature is installed. If it is not installed, select **Add Roles and Features** to add this feature.
+6. In IIS Manager, double-click **Configuration Editor** in the website that contains the Web Service SDK virtual directory. It is important to select the website, not the virtual directory.
+7. Go to the **system.webServer/security/authentication/iisClientCertificateMappingAuthentication** section.
+8. Set enabled to **true**.
+9. Set oneToOneCertificateMappingsEnabled to **true**.
+10. Click the **...** button next to oneToOneMappings, and then click the **Add** link.
+11. Open the Base64 .cer file you exported earlier. Remove *--BEGIN CERTIFICATE--*, *--END CERTIFICATE--*, and any line breaks. Copy the resulting string.
+12. Set certificate to the string copied in the preceding step.
+13. Set enabled to **true**.
+14. Set userName to an account that is a member of the PhoneFactor Admins security group. Use the &lt;domain&gt;&#92;&lt;user name&gt; format.
+15. Set the password to the appropriate account password, and then close Configuration Editor.
+16. Click the **Apply** link.
+17. In the Web Service SDK virtual directory, double-click **Authentication**.
+18. Verify that ASP.NET Impersonation and Basic Authentication are set to **Enabled**, and that all other items are set to **Disabled**.
+19. In the Web Service SDK virtual directory, double-click **SSL Settings**.
+20. Set Client Certificates to **Accept**, and then click **Apply**.
+21. Copy the .pfx file you exported earlier to the server that is running the AD FS adapter.
+22. Import the .pfx file to the local computer personal certificate store.
+23. Right-click and select **Manage Private Keys**, and then grant read access to the account you used to sign in to the AD FS service.
+24. Open the client certificate and copy the thumbprint from the **Details** tab.
+25. In the MultiFactorAuthenticationAdfsAdapter.config file, set **WebServiceSdkCertificateThumbprint** to the string copied in the previous step.
+
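+As a quick check for steps 24 and 25, the following sketch lists the certificates in the local computer personal store so you can copy the right thumbprint:
+
+```powershell
+# List subject and thumbprint for certificates in the local computer personal store.
+Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, Thumbprint
+```
+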
+Finally, to register the adapter, run the \Program Files\Multi-Factor Authentication Server\Register-MultiFactorAuthenticationAdfsAdapter.ps1 script in PowerShell. The adapter is registered as WindowsAzureMultiFactorAuthentication. Restart the AD FS service for the registration to take effect.
+
+## Secure Azure AD resources using AD FS
+
+To secure your cloud resource, set up a claims rule so that Active Directory Federation Services emits the multipleauthn claim when a user performs two-step verification successfully. This claim is passed on to Azure AD. The following procedure walks you through the steps:
+
+1. Open AD FS Management.
+2. On the left, select **Relying Party Trusts**.
+3. Right-click on **Microsoft Office 365 Identity Platform** and select **Edit Claim Rules…**
+
+ ![Edit claim rules in the ADFS console](./media/howto-mfaserver-adfs-2012/trustedip1.png)
+
+4. On the **Issuance Transform Rules** tab, click **Add Rule**.
+
+ ![Edit transform rules in the ADFS console](./media/howto-mfaserver-adfs-2012/trustedip2.png)
+
+5. In the Add Transform Claim Rule Wizard, select **Pass Through or Filter an Incoming Claim** from the drop-down list, and then click **Next**.
+
+ ![Add transform claim rule wizard](./media/howto-mfaserver-adfs-2012/trustedip3.png)
+
+6. Give your rule a name.
+7. Select **Authentication Methods References** as the Incoming claim type.
+8. Select **Pass through all claim values**.
+
+ ![Add Transform Claim Rule Wizard](./media/howto-mfaserver-adfs-2012/configurewizard.png)
+
+9. Click **Finish**. Close the AD FS Management console.
+
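+For reference, the wizard's pass-through rule corresponds to the following claim rule language (a sketch; the rule name is whatever you entered in step 6):
+
+```
+@RuleName = "Pass through MFA claim"
+c:[Type == "http://schemas.microsoft.com/claims/authnmethodsreferences"]
+ => issue(claim = c);
+```
+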
+## Troubleshooting logs
+
+To help with troubleshooting issues with the MFA Server AD FS adapter, use the following steps to enable additional logging.
+
+1. In the MFA Server interface, open the AD FS section, and check the **Enable logging** checkbox.
+2. On each AD FS server, use **regedit.exe** to create the string value registry key `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Positive Networks\PhoneFactor\InstallPath` with the value `C:\Program Files\Multi-Factor Authentication Server\` (or another directory of your choice). **Note:** The trailing backslash is important.
+3. Create the `C:\Program Files\Multi-Factor Authentication Server\Logs` directory (or another directory, as referenced in step 2).
+4. Grant Modify access on the Logs directory to the AD FS service account.
+5. Restart the AD FS service.
+6. Verify that the `MultiFactorAuthAdfsAdapter.log` file was created in the Logs directory.
+
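+If you prefer to script steps 2 through 4, the following sketch is equivalent. The service account name is a placeholder; substitute your AD FS service account:
+
+```powershell
+$installPath = "C:\Program Files\Multi-Factor Authentication Server\"   # the trailing backslash is important
+$keyPath = "HKLM:\SOFTWARE\WOW6432Node\Positive Networks\PhoneFactor"
+
+# Step 2: create the InstallPath string value.
+New-Item -Path $keyPath -Force | Out-Null
+New-ItemProperty -Path $keyPath -Name "InstallPath" -PropertyType String -Value $installPath -Force | Out-Null
+
+# Step 3: create the Logs directory.
+New-Item -ItemType Directory -Path (Join-Path $installPath "Logs") -Force | Out-Null
+
+# Step 4: grant Modify access to the AD FS service account (placeholder account name).
+icacls (Join-Path $installPath "Logs") /grant "CONTOSO\adfssvc:(OI)(CI)M"
+```
+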
+## Related topics
+
+For troubleshooting help, see the [Azure Multi-Factor Authentication FAQs](multi-factor-authentication-faq.yml).
active-directory Multi Factor Authentication Get Started Adfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/multi-factor-authentication-get-started-adfs.md
Previously updated : 11/21/2019 Last updated : 08/27/2021
Caveats with app passwords for federated users:
For information on setting up either Azure AD Multi-Factor Authentication or the Azure Multi-Factor Authentication Server with AD FS, see the following articles: * [Secure cloud resources using Azure AD Multi-Factor Authentication and AD FS](howto-mfa-adfs.md)
-* [Secure cloud and on-premises resources using Azure Multi-Factor Authentication Server with Windows Server 2012 R2 AD FS](howto-mfaserver-adfs-2012.md)
+* [Secure cloud and on-premises resources using Azure Multi-Factor Authentication Server with Windows Server](howto-mfaserver-adfs-windows-server.md)
* [Secure cloud and on-premises resources using Azure Multi-Factor Authentication Server with AD FS 2.0](howto-mfaserver-adfs-2.md)
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
Previously updated : 07/26/2021 Last updated : 08/25/2021
To complete this tutorial, you need the following resources and privileges:
* If needed, [complete the previous tutorial to enable Azure AD SSPR](tutorial-enable-sspr.md). * An existing on-premises AD DS environment configured with a current version of Azure AD Connect. * If needed, configure Azure AD Connect using the [Express](../hybrid/how-to-connect-install-express.md) or [Custom](../hybrid/how-to-connect-install-custom.md) settings.
- * To use password writeback, your Domain Controllers must be Windows Server 2012 or later.
+ * To use password writeback, your Domain Controllers must be Windows Server 2016 or later.
## Configure account permissions for Azure AD Connect
-Azure AD Connect lets you synchronize users, groups, and credential between an on-premises AD DS environment and Azure AD. You typically install Azure AD Connect on a Windows Server 2012 or later computer that's joined to the on-premises AD DS domain.
+Azure AD Connect lets you synchronize users, groups, and credentials between an on-premises AD DS environment and Azure AD. You typically install Azure AD Connect on a Windows Server 2016 or later computer that's joined to the on-premises AD DS domain.
To correctly work with SSPR writeback, the account specified in Azure AD Connect must have the appropriate permissions and options set. If you're not sure which account is currently in use, open Azure AD Connect and select the **View current configuration** option. The account that you need to add permissions to is listed under **Synchronized Directories**. The following permissions and options must be set on the account:
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
For customers with access to [Identity Protection](../identity-protection/overvi
## User risk
-For customers with access to [Identity Protection](../identity-protection/overview-identity-protection.md), user risk can be evaluated as part of a Conditional Access policy. User risk represents the probability that a given identity or account is compromised. More information about user risk can be found in the articles, [What is risk](../identity-protection/concept-identity-protection-risks.md#user-risk) and [How To: Configure and enable risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md).
+For customers with access to [Identity Protection](../identity-protection/overview-identity-protection.md), user risk can be evaluated as part of a Conditional Access policy. User risk represents the probability that a given identity or account is compromised. More information about user risk can be found in the articles, [What is risk](../identity-protection/concept-identity-protection-risks.md#user-linked-detections) and [How To: Configure and enable risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md).
## Device platforms
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
# Conditional Access: User risk-based Conditional Access
-Microsoft works with researchers, law enforcement, various security teams at Microsoft, and other trusted sources to find leaked username and password pairs. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection user risk detections](../identity-protection/concept-identity-protection-risks.md#user-risk).
+Microsoft works with researchers, law enforcement, various security teams at Microsoft, and other trusted sources to find leaked username and password pairs. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection user risk detections](../identity-protection/concept-identity-protection-risks.md#user-linked-detections).
There are two locations where this policy can be configured: Conditional Access and Identity Protection. Configuration using a Conditional Access policy is the preferred method, providing more context including enhanced diagnostic data, report-only mode integration, Graph API support, and the ability to use other Conditional Access attributes in the policy.
active-directory Quickstart V2 Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
npm install @azure/msal-browser @azure/msal-react
## Next steps
-For a detailed step-by-step guide on building the auth code flow application using vanilla JavaScript, see the following tutorial:
+Next, try a step-by-step tutorial to learn how to build a React SPA from scratch that signs in users and calls the Microsoft Graph API to get user profile data:
> [!div class="nextstepaction"]
-> [Tutorial to sign in and call MS Graph](./tutorial-v2-javascript-auth-code.md)
+> [Tutorial: Sign in users and call Microsoft Graph](tutorial-v2-react.md)
active-directory Reference Saml Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-saml-tokens.md
This is a sample of a typical SAML token.
## Next steps
-* To learn more about managing token lifetime policy using the Microsoft Graph API, see the [Azure AD policy resource overview](/graph/api/resources/policy).
+* To learn more about managing token lifetime policy using the Microsoft Graph API, see the [Azure AD policy resource overview](/graph/api/resources/policy-overview).
* Add [custom and optional claims](active-directory-optional-claims.md) to the tokens for your application. * Use [Single Sign-On (SSO) with SAML](single-sign-on-saml-protocol.md).
-* Use the [Azure Single Sign-Out SAML protocol](single-sign-out-saml-protocol.md)
+* Use the [Azure Single Sign-Out SAML protocol](single-sign-out-saml-protocol.md)
active-directory Scenario Spa App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-app-configuration.md
In an MSAL library, the application registration information is passed as config
# [JavaScript (MSAL.js v2)](#tab/javascript2)

```javascript
+import * as Msal from "@azure/msal-browser"; // if using CDN, 'Msal' will be available in global scope
+
// Configuration object constructed.
const config = {
  auth: {
};

// create PublicClientApplication instance
-const publicClientApplication = new PublicClientApplication(config);
+const publicClientApplication = new Msal.PublicClientApplication(config);
```

For more information on the configurable options, see [Initializing application with MSAL.js](msal-js-initializing-client-applications.md).
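For reference, a minimal sketch of a complete configuration object (all values are illustrative placeholders):

```javascript
import * as Msal from "@azure/msal-browser";

const config = {
  auth: {
    clientId: "11111111-1111-1111-1111-111111111111", // your app registration's (client) ID
    authority: "https://login.microsoftonline.com/common",
    redirectUri: "https://localhost:3000",
  },
};

// create PublicClientApplication instance
const publicClientApplication = new Msal.PublicClientApplication(config);
```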
# [JavaScript (MSAL.js v1)](#tab/javascript1)

```javascript
+import * as Msal from "msal"; // if using CDN, 'Msal' will be available in global scope
+
// Configuration object constructed.
const config = {
  auth: {
};

// create UserAgentApplication instance
-const userAgentApplication = new UserAgentApplication(config);
+const userAgentApplication = new Msal.UserAgentApplication(config);
```

For more information on the configurable options, see [Initializing application with MSAL.js](msal-js-initializing-client-applications.md).
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Previously updated : 07/19/2021 Last updated : 08/30/2021
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `redirect_uri` | required | The same redirect_uri value that was used to acquire the authorization_code. |
| `grant_type` | required | Must be `authorization_code` for the authorization code flow. |
| `code_verifier` | recommended | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
-| `client_secret` | required for confidential web apps | The application secret that you created in the app registration portal for your app. You shouldn't use the application secret in a native app or single page app because client_secrets can't be reliably stored on devices or web pages. It's required for web apps and web APIs, which have the ability to store the client_secret securely on the server side. Like all parameters discussed here, the client secret must be URL-encoded before being sent, a step usually performed by the SDK. For more information on uri encoding, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). |
+| `client_secret` | required for confidential web apps | The application secret that you created in the app registration portal for your app. You shouldn't use the application secret in a native app or single page app because client_secrets can't be reliably stored on devices or web pages. It's required for web apps and web APIs, which have the ability to store the client_secret securely on the server side. Like all parameters discussed here, the client secret must be URL-encoded before being sent, a step usually performed by the SDK. For more information on URI encoding, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). The Basic auth pattern of providing credentials in the Authorization header instead, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1), is also supported. |
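As a sketch of that Basic auth pattern (all values are illustrative): URL-encode the client ID and secret, join them with a colon, Base64-encode the result, and send it in the `Authorization` header instead of including `client_secret` in the body.

```http
POST /{tenant}/oauth2/v2.0/token HTTP/1.1
Host: login.microsoftonline.com
Authorization: Basic {base64(client_id:client_secret)}
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code={authorization_code}&redirect_uri={redirect_uri}
```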
### Request an access token with a certificate credential
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
Previously updated : 06/30/2021 Last updated : 08/30/2021
curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d 'client_id=
| `tenant` | Required | The directory tenant the application plans to operate against, in GUID or domain-name format. |
| `client_id` | Required | The application ID that's assigned to your app. You can find this information in the portal where you registered your app. |
| `scope` | Required | The value passed for the `scope` parameter in this request should be the resource identifier (application ID URI) of the resource you want, affixed with the `.default` suffix. For the Microsoft Graph example, the value is `https://graph.microsoft.com/.default`. <br/>This value tells the Microsoft identity platform that of all the direct application permissions you have configured for your app, the endpoint should issue a token for the ones associated with the resource you want to use. To learn more about the `/.default` scope, see the [consent documentation](v2-permissions-and-consent.md#the-default-scope). |
-| `client_secret` | Required | The client secret that you generated for your app in the app registration portal. The client secret must be URL-encoded before being sent. |
+| `client_secret` | Required | The client secret that you generated for your app in the app registration portal. The client secret must be URL-encoded before being sent. The Basic auth pattern of providing credentials in the Authorization header instead, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1), is also supported. |
| `grant_type` | Required | Must be set to `client_credentials`. | ### Second case: Access token request with a certificate
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Previously updated : 07/16/2021 Last updated : 08/30/2021
When using a shared secret, a service-to-service access token request contains t
| --- | --- | --- |
| `grant_type` | Required | The type of token request. For a request using a JWT, the value must be `urn:ietf:params:oauth:grant-type:jwt-bearer`. |
| `client_id` | Required | The application (client) ID that [the Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page has assigned to your app. |
-| `client_secret` | Required | The client secret that you generated for your app in the Azure portal - App registrations page. |
+| `client_secret` | Required | The client secret that you generated for your app in the Azure portal - App registrations page. The Basic auth pattern of providing credentials in the Authorization header instead, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1), is also supported. |
| `assertion` | Required | The access token that was sent to the middle-tier API. This token must have an audience (`aud`) claim of the app making this OBO request (the app denoted by the `client-id` field). Applications cannot redeem a token for a different app (for example, if a client sends an API a token meant for Microsoft Graph, the API cannot redeem it using OBO; it should instead reject the token). |
| `scope` | Required | A space separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). |
| `requested_token_use` | Required | Specifies how the request should be processed. In the OBO flow, the value must be set to `on_behalf_of`. |
A service-to-service request for a SAML assertion contains the following paramet
| grant_type |required | The type of the token request. For a request that uses a JWT, the value must be **urn:ietf:params:oauth:grant-type:jwt-bearer**. |
| assertion |required | The value of the access token used in the request.|
| client_id |required | The app ID assigned to the calling service during registration with Azure AD. To find the app ID in the Azure portal, select **Active Directory**, choose the directory, and then select the application name. |
-| client_secret |required | The key registered for the calling service in Azure AD. This value should have been noted at the time of registration. |
+| client_secret |required | The key registered for the calling service in Azure AD. This value should have been noted at the time of registration. The Basic auth pattern of providing credentials in the Authorization header instead, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1), is also supported. |
| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). For example, 'https://testapp.contoso.com/user_impersonation openid' |
| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be **on_behalf_of**. |
| requested_token_type | required | Specifies the type of token requested. The value can be **urn:ietf:params:oauth:token-type:saml2** or **urn:ietf:params:oauth:token-type:saml1** depending on the requirements of the accessed resource. |
active-directory Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/identity-providers.md
Previously updated : 07/26/2021 Last updated : 08/30/2021
An *identity provider* creates, maintains, and manages identity information while providing authentication services to applications. When sharing your apps and resources with external users, Azure AD is the default identity provider for sharing. This means when you invite external users who already have an Azure AD or Microsoft account, they can automatically sign in without further configuration on your part.
-In addition to Azure AD accounts, External Identities offers a variety of identity providers.
+External Identities offers a variety of identity providers.
+
+- **Azure Active Directory accounts**: Guest users can use their Azure AD work or school accounts to redeem your B2B collaboration invitations or complete your sign-up user flows. [Azure Active Directory](azure-ad-account.md) is one of the allowed identity providers by default. No additional configuration is needed to make this identity provider available for user flows.
- **Microsoft accounts**: Guest users can use their own personal Microsoft account (MSA) to redeem your B2B collaboration invitations. When setting up a self-service sign-up user flow, you can add [Microsoft Account](microsoft-account.md) as one of the allowed identity providers. No additional configuration is needed to make this identity provider available for user flows.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
The new simplified user flow experience offers feature parity with preview featu
**Service category:** Identity Protection **Product capability:** Identity Security & Protection
-This new detection serves as an ad-hoc method to allow our security teams to notify you and protect your users by raising their session risk to a High risk when we observe an attack happening. The detection will also mark the associated sign-ins as risky. This detection follows the existing Azure Active Directory threat intelligence for user risk detection to provide complete coverage of the various attacks observed by Microsoft security teams. [Learn more](../identity-protection/concept-identity-protection-risks.md#user-risk).
+This new detection serves as an ad-hoc method to allow our security teams to notify you and protect your users by raising their session risk to a High risk when we observe an attack happening. The detection will also mark the associated sign-ins as risky. This detection follows the existing Azure Active Directory threat intelligence for user risk detection to provide complete coverage of the various attacks observed by Microsoft security teams. [Learn more](../identity-protection/concept-identity-protection-risks.md#user-linked-detections).
active-directory Whatis Phs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/whatis-phs.md
Password hash synchronization helps by reducing the number of passwords, your us
* Improve the productivity of your users. * Reduce your helpdesk costs.
-Password Hash Sync also enables [leaked credential detection](../identity-protection/concept-identity-protection-risks.md#user-risk) for your hybrid accounts. Microsoft works alongside dark web researchers and law enforcement agencies to find publicly available username/password pairs. If any of these pairs match those of our users, the associated account is moved to high risk.
+Password Hash Sync also enables [leaked credential detection](../identity-protection/concept-identity-protection-risks.md#user-linked-detections) for your hybrid accounts. Microsoft works alongside dark web researchers and law enforcement agencies to find publicly available username/password pairs. If any of these pairs match those of our users, the associated account is moved to high risk.
>[!NOTE] > Only new leaked credentials found after you enable PHS will be processed against your tenant. Verifying against previously found credential pairs is not performed.
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Previously updated : 07/16/2021 Last updated : 08/30/2021
# What is risk?
-Risk detections in Azure AD Identity Protection include any identified suspicious actions related to user accounts in the directory.
+Risk detections in Azure AD Identity Protection include any identified suspicious actions related to user accounts in the directory. Risk detections (both user and sign-in linked) contribute to the overall user risk score that is found in the Risky Users report.
Identity Protection provides organizations access to powerful resources to see and respond quickly to these suspicious actions.
Identity Protection provides organizations access to powerful resources to see a
## Risk types and detection
-There are two types of risk **User** and **Sign-in** and two types of detection or calculation **Real-time** and **Offline**.
+Risk can be detected at the **User** and **Sign-in** levels, and there are two types of detection or calculation: **Real-time** and **Offline**.
Real-time detections may not show up in reporting for five to ten minutes. Offline detections may not show up in reporting for two to twenty-four hours.
-### User risk
+### User-linked detections
-A user risk represents the probability that a given identity or account is compromised.
+Risky activity can be detected for a user when it isn't linked to a specific malicious sign-in but to the user itself. These risk detections are calculated offline using Microsoft's internal and external threat intelligence sources, including security researchers, law enforcement professionals, security teams at Microsoft, and other trusted sources.
active-directory Datawiza With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
Previously updated : 8/30/2021 Last updated : 8/27/2021
Instead, if you want to use an existing web application in your Azure AD tenant,
integration](https://docs.datawiza.com/step-by-step/step3.html). [Deploy DAB with Kubernetes](https://docs.datawiza.com/tutorial/web-app-AKS.html). A sample docker image `docker-compose.yml` file is provided for you to download and use. [Log in to the container registry](https://docs.datawiza.com/step-by-step/step3.html#important-step) to download the images of DAB and the header-based application.

```YML
-
- datawiza-access-broker:\
- image: registry.gitlab.com/datawiza/access-broker\
- container\_name: datawiza-access-broker\
- restart: always\
- ports:\
- - \"9772:9772\"\
- environment:\
- PROVISIONING\_KEY: \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\
- PROVISIONING\_SECRET: \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\
- \
- header-based-app:\
- image: registry.gitlab.com/datawiza/header-based-app\
- restart: always\
- ports:\
- - \"3001:3001\"
+
+ datawiza-access-broker:
+ image: registry.gitlab.com/datawiza/access-broker
+ container_name: datawiza-access-broker
+ restart: always
+ ports:
+ - "9772:9772"
+ environment:
+ PROVISIONING_KEY: #############################################
+ PROVISIONING_SECRET: ##############################################
+
+ header-based-app:
+ image: registry.gitlab.com/datawiza/header-based-app
+ restart: always
+ ports:
+ - "3001:3001"
```

2. After executing `docker-compose -f docker-compose.yml up`, the
active-directory Concur Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/concur-tutorial.md
Previously updated : 12/26/2020 Last updated : 08/26/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Concur supports **SP** initiated SSO
-* Concur supports **Just In Time** user provisioning
+* Concur supports **SP** initiated SSO.
+* Concur supports **Just In Time** user provisioning.
+* Concur supports [Automated user provisioning](concur-provisioning-tutorial.md).
## Adding Concur from the gallery
To configure single sign-on on **Concur** side, you need to send the downloaded
In this section, a user called B.Simon is created in Concur. Concur supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Concur, a new one is created after authentication.
+Concur also supports automatic user provisioning. You can find more details on how to configure it in the [Concur provisioning tutorial](./concur-provisioning-tutorial.md).
+
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
active-directory Contentful Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/contentful-tutorial.md
Previously updated : 05/13/2021 Last updated : 08/27/2021
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Contentful supports **SP and IDP** initiated SSO. * Contentful supports **Just In Time** user provisioning.
+* Contentful supports [Automated user provisioning](contentful-provisioning-tutorial.md).
> [!NOTE] > The identifier of this application is a fixed string value so only one instance can be configured in one tenant.
If that doesn't work, reach out to the [Contentful support team](mailto:support@
In this section, a user called B.Simon is created in Contentful. Contentful supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Contentful, a new one is created after authentication.
+Contentful also supports automatic user provisioning. You can find more details on how to configure it in the [Contentful provisioning tutorial](./contentful-provisioning-tutorial.md).
+
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
active-directory Cornerstone Ondemand Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cornerstone-ondemand-tutorial.md
Previously updated : 06/24/2021 Last updated : 08/27/2021
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Cornerstone supports **SP** initiated SSO.
+* Cornerstone supports [Automated user provisioning](cornerstone-ondemand-provisioning-tutorial.md).
* If you are integrating one or multiple products from this particular list, you should use this Cornerstone Single Sign-On app from the Gallery. We offer solutions for:
To configure SSO in Cornerstone, you need to reach out to your Cornerstone imple
In this section, you create a user called Britta Simon in Cornerstone. Please work with your Cornerstone implementation project team to add the users in Cornerstone. Users must be created and activated before you use single sign-on.
+Cornerstone Single Sign-On also supports automatic user provisioning. You can find more details on how to configure it in the [Cornerstone provisioning tutorial](./cornerstone-ondemand-provisioning-tutorial.md).
## Test SSO
active-directory Druva Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/druva-tutorial.md
Previously updated : 06/02/2021 Last updated : 08/23/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Druva supports **IDP** initiated SSO.
+* Druva supports [Automated user provisioning](druva-provisioning-tutorial.md).
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
In this section, a user called B.Simon is created in Druva. Druva supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Druva, a new one is created after authentication.
+Druva also supports automatic user provisioning. For details on how to configure it, see the [Druva provisioning tutorial](./druva-provisioning-tutorial.md).
+ ## Test SSO In this section, you test your Azure AD single sign-on configuration with the following options.
active-directory Productive Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/productive-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Productive | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Productive.
+ Last updated : 08/27/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Productive
+
+In this tutorial, you'll learn how to integrate Productive with Azure Active Directory (Azure AD). When you integrate Productive with Azure AD, you can:
+
+* Control in Azure AD who has access to Productive.
+* Enable your users to be automatically signed in to Productive with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Productive single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Productive supports **SP and IDP** initiated SSO.
+* Productive supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Productive from the gallery
+
+To configure the integration of Productive into Azure AD, you need to add Productive from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Productive** in the search box.
+1. Select **Productive** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Productive
+
+Configure and test Azure AD SSO with Productive using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Productive.
+
+To configure and test Azure AD SSO with Productive, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Productive SSO](#configure-productive-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Productive test user](#create-productive-test-user)** - to have a counterpart of B.Simon in Productive that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Productive** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
+
+ a. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://api.productive.io/api/v2/sessions/consume_single_sign_on?account_id=<ID>&app=https://latest.productive.io/public/sso`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.productive.io/public/sso`
+
+ > [!NOTE]
+ > This value is not real. Update this value with the actual Reply URL. Contact [Productive Client support team](mailto:support@productive.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Your Productive application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but Productive expects it to be mapped to the user's email address. For that, you can use the **user.mail** attribute from the list, or use the appropriate attribute value based on your organization's configuration.
+
+ ![image](common/default-attributes.png)
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Productive.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Productive**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Productive SSO
+
+To configure single sign-on on the **Productive** side, you need to send the **App Federation Metadata Url** to the [Productive support team](mailto:support@productive.io). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Productive test user
+
+In this section, a user called Britta Simon is created in Productive. Productive supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Productive, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects you to the Productive Sign-on URL, where you can initiate the login flow.
+
+* Go to the Productive Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Productive instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Productive tile in My Apps, if the app is configured in SP mode, you are redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the Productive instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Productive, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Teachme Biz Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/teachme-biz-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Teachme Biz | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Teachme Biz.
+ Last updated : 08/27/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Teachme Biz
+
+In this tutorial, you'll learn how to integrate Teachme Biz with Azure Active Directory (Azure AD). When you integrate Teachme Biz with Azure AD, you can:
+
+* Control in Azure AD who has access to Teachme Biz.
+* Enable your users to be automatically signed in to Teachme Biz with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Teachme Biz single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Teachme Biz supports **SP and IDP** initiated SSO.
+
+## Add Teachme Biz from the gallery
+
+To configure the integration of Teachme Biz into Azure AD, you need to add Teachme Biz from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Teachme Biz** in the search box.
+1. Select **Teachme Biz** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Teachme Biz
+
+Configure and test Azure AD SSO with Teachme Biz using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Teachme Biz.
+
+To configure and test Azure AD SSO with Teachme Biz, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Teachme Biz SSO](#configure-teachme-biz-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Teachme Biz test user](#create-teachme-biz-test-user)** - to have a counterpart of B.Simon in Teachme Biz that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Teachme Biz** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://teachme.jp/saml/entity/<GroupID>`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://teachme.jp/saml/consume`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://teachme.jp/<GroupID>/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Sign-on URL. Contact [Teachme Biz Client support team](mailto:support@teachme.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Teachme Biz.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Teachme Biz**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Teachme Biz SSO
+
+To configure single sign-on on the **Teachme Biz** side, you need to send the **App Federation Metadata Url** to the [Teachme Biz support team](mailto:support@teachme.jp). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Teachme Biz test user
+
+In this section, you create a user called Britta Simon in Teachme Biz. Work with the [Teachme Biz support team](mailto:support@teachme.jp) to add the users in the Teachme Biz platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects you to the Teachme Biz Sign-on URL, where you can initiate the login flow.
+
+* Go to the Teachme Biz Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Teachme Biz instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Teachme Biz tile in My Apps, if the app is configured in SP mode, you are redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the Teachme Biz instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Teachme Biz, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/custom-node-configuration.md
The supported Kubelet parameters and accepted values are listed below.
| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. | | `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment, see more [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). Only kubernetes v1.18+. | | `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. |
+| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 MB | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
+| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. See the example configuration after this table. |
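+
+For example, these two log settings can be supplied in a kubelet configuration file when you add a node pool. The following is a minimal sketch; the file name and values are illustrative, and a real configuration may set additional kubelet parameters:
+
+```azurecli
+# Write a kubelet configuration that rotates container logs at 20 MB and keeps up to 6 files (illustrative values).
+cat > linuxkubeletconfig.json <<'EOF'
+{
+  "containerLogMaxSizeMB": 20,
+  "containerLogMaxFiles": 6
+}
+EOF
+
+# Apply the configuration to a new node pool with --kubelet-config.
+az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json
+```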
### Linux OS custom configuration
az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-gr
[az-aks-nodepool-update]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview#enable-cluster-auto-scaler-for-a-node-pool [autoscaler-scaledown]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node [autoscaler-parameters]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca
-[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why
+[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster Previously updated : 6/14/2021 Last updated : 8/30/2021
The following parameters can be leveraged to configure Private DNS Zone.
- "System", which is also the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group. - "None", defaults to public DNS which means AKS will not create a Private DNS Zone (PREVIEW). -- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID", which requires you to create a Private DNS Zone in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource Id of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` and `vnet contributor` roles.
+- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID", which requires you to create a Private DNS Zone in this format for Azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource ID of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` and `vnet contributor` roles.
- If the Private DNS Zone is in a different subscription than the AKS cluster, you need to register the Microsoft.ContainerService resource provider in both subscriptions. - "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only, to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`, as shown in the example below
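For example, a sketch of creating a private cluster that uses a custom Private DNS Zone together with an fqdn subdomain (the identity, zone resource ID, and subdomain values are placeholders):

```azurecli-interactive
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <custom-private-dns-zone-resource-id> --fqdn-subdomain <subdomain>
```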
The following parameters can be leveraged to configure Private DNS Zone.
* The AKS Preview version 0.5.19 or later * The api version 2021-05-01 or later
-To use the fqdn-subdomain feature, you must enable the `EnablePrivateClusterFQDNSubdomain` feature flag on your subscription.
-
-Register the `EnablePrivateClusterFQDNSubdomain` feature flag by using the [az feature register][az-feature-register] command as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnablePrivateClusterFQDNSubdomain"
-```
-
-You can check on the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnablePrivateClusterFQDNSubdomain')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ### Create a private AKS cluster with Private DNS Zone ```azurecli-interactive
The Public DNS option can be leveraged to simplify routing options for your Priv
2. If you use both `--enable-public-fqdn` and `--private-dns-zone none`, the cluster will only have a public FQDN. When using this option, no Private DNS Zone is created or used for the name resolution of the FQDN of the API Server. The IP of the API is still private and not publicly routable.
-### Register the `EnablePrivateClusterPublicFQDN` preview feature
-
-To use the new Enable Private Cluster Public FQDN API, you must enable the `EnablePrivateClusterPublicFQDN` feature flag on your subscription.
-
-Register the `EnablePrivateClusterPublicFQDN` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnablePrivateClusterPublicFQDN"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnablePrivateClusterPublicFQDN')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
-
-### Create a private AKS cluster with a Public DNS address
- ```azurecli-interactive az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <private-dns-zone-mode> --enable-public-fqdn ```
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
You can use one minor version older or newer of `kubectl` relative to your *kube
For example, if your *kube-apiserver* is at *1.17*, then you can use versions *1.16* to *1.18* of `kubectl` with that *kube-apiserver*.
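+To check which client and server versions you are running (a quick sketch; flag support can vary by `kubectl` version):
+
+```bash
+# Print the kubectl client version and the kube-apiserver version.
+kubectl version --short
+```
+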
+To install or update `kubectl` to the latest version, run:
+ ### [Azure CLI](#tab/azure-cli)
-To install or update your version of `kubectl`, run `az aks install-cli`.
+```azurecli
+az aks install-cli
+```
### [Azure PowerShell](#tab/azure-powershell)
-To install or update your version of `kubectl`, run [Install-AzAksKubectl][install-azakskubectl].
-
+```powershell
+Install-AzAksKubectl -Version latest
+```
## Release and deprecation process
For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes
[aks-upgrade]: upgrade-cluster.md [az-aks-get-versions]: /cli/azure/aks#az_aks_get_versions [preview-terms]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
-[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
[get-azaksversion]: /powershell/module/az.aks/get-azaksversion
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-internal-vnet.md
After successful deployment, you should see your API Management service's **priv
| Virtual IP address | Description | | -- | -- | | **Private virtual IP address** | A load balanced IP address from within the API Management-delegated subnet, over which you can access `gateway`, `portal`, `management`, and `scm` endpoints. |
-| **Public virtual IP address** | Used *mainly* for control plane traffic to `management` endpoint over `port 3443`. Can be locked down to the [ApiManagement][ServiceTags] service tag. In the external VNet configuration, they are also used for runtime API traffic. |
+| **Public virtual IP address** | Used for control plane traffic to the `management` endpoint over `port 3443`. Can be locked down to the [ApiManagement][ServiceTags] service tag (see the example rule below). In the `none` and external VNet configurations, it is used for incoming runtime API traffic. It is also used for outgoing runtime traffic to the internet in all VNet configurations. |
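+
+For example, a sketch of an NSG rule that restricts inbound control-plane traffic on port 3443 to the ApiManagement service tag (the resource group and NSG names are placeholders):
+
+```azurecli
+# Allow inbound management-endpoint traffic only from the ApiManagement service tag (illustrative names).
+az network nsg rule create --resource-group <resource-group> --nsg-name <apim-subnet-nsg> --name AllowApiManagementControlPlane --priority 100 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes ApiManagement --destination-port-ranges 3443
+```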
![API Management dashboard with an internal VNET configured][api-management-internal-vnet-dashboard]
app-service Configure Language Dotnet Framework https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-dotnet-framework.md
If you configure an app setting with the same name in App Service and in *web.co
## Deploy multi-project solutions
-When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git, or with ZIP deploy [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following in the [Cloud Shell](https://shell.azure.com):
+When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git, or with ZIP deploy [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PROJECT="<project-name>/<project-name>.csproj"
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-dotnetcore.md
az webapp config set --name <app-name> --resource-group <resource-group-name> --
## Customize build automation
-If you deploy your app using Git, or zip packages [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service build automation steps through the following sequence:
+If you deploy your app using Git, or zip packages [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the App Service build automation steps through the following sequence:
1. Run custom script if specified by `PRE_BUILD_SCRIPT_PATH`. 1. Run `dotnet restore` to restore NuGet dependencies.
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
## Deploy multi-project solutions
-When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git, or with ZIP deploy [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following in the [Cloud Shell](https://shell.azure.com):
+When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git, or with ZIP deploy [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PROJECT="<project-name>/<project-name>.csproj"
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
Otherwise, your deployment method will depend on your archive type:
### Java SE
-To deploy .jar files to Java SE, use the `/api/zipdeploy/` endpoint of the Kudu site. For more information on this API, please see [this documentation](./deploy-zip.md#rest).
+To deploy .jar files to Java SE, use the `/api/publish/` endpoint of the Kudu site. For more information on this API, please see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
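+
+As an illustration (a sketch that assumes your app name and [deployment credentials](deploy-configure-credentials.md); the file name is a placeholder), you can POST a .jar to this endpoint with cURL:
+
+```bash
+# Deploy app.jar through the Kudu publish API; enter the deployment password when prompted.
+curl -X POST -u <username> --data-binary @"app.jar" "https://<app-name>.scm.azurewebsites.net/api/publish?type=jar"
+```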
> [!NOTE] > Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](/azure/app-service/faq-app-service-linux#built-in-images) textbox in the Configuration section of the portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`). ### Tomcat
-To deploy .war files to Tomcat, use the `/api/wardeploy/` endpoint to POST your archive file. For more information on this API, please see [this documentation](./deploy-zip.md#deploy-war-file).
+To deploy .war files to Tomcat, use the `/api/wardeploy/` endpoint to POST your archive file. For more information on this API, please see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
::: zone pivot="platform-linux" ### JBoss EAP
-To deploy .war files to JBoss, use the `/api/wardeploy/` endpoint to POST your archive file. For more information on this API, please see [this documentation](./deploy-zip.md#deploy-war-file).
+To deploy .war files to JBoss, use the `/api/wardeploy/` endpoint to POST your archive file. For more information on this API, please see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
To deploy .ear files, [use FTP](deploy-ftp.md). Your .ear application will be deployed to the context root defined in your application's configuration. For example, if the context root of your app is `<context-root>myapp</context-root>`, then you can browse the site at the `/myapp` path: `http://my-app-name.azurewebsites.net/myapp`. If you want your web app to be served at the root path, ensure that your app sets the context root to the root path: `<context-root>/</context-root>`. For more information, see [Setting the context root of a web application](https://docs.jboss.org/jbossas/guides/webguide/r2/en/html/ch06.html).
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-nodejs.md
zone_pivot_groups: app-service-platform-windows-linux
# Configure a Node.js app for Azure App Service
-Node.js apps must be deployed with all the required NPM dependencies. The App Service deployment engine automatically runs `npm install --production` for you when you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation). If you deploy your files using [FTP/S](deploy-ftp.md), however, you need to upload the required packages manually.
+Node.js apps must be deployed with all the required NPM dependencies. The App Service deployment engine automatically runs `npm install --production` for you when you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy). If you deploy your files using [FTP/S](deploy-ftp.md), however, you need to upload the required packages manually.
This guide provides key concepts and instructions for Node.js developers who deploy to App Service. If you've never used Azure App Service, follow the [Node.js quickstart](quickstart-nodejs.md) and [Node.js with MongoDB tutorial](tutorial-nodejs-mongodb-app.md) first.
app.listen(port, () => {
## Customize build automation
-If you deploy your app using Git, or zip packages [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service build automation steps through the following sequence:
+If you deploy your app using Git, or zip packages [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the App Service build automation steps through the following sequence:
1. Run custom script if specified by `PRE_BUILD_SCRIPT_PATH`. 1. Run `npm install` without any flags, which includes npm `preinstall` and `postinstall` scripts and also installs `devDependencies`.
process.env.NODE_ENV
## Run Grunt/Bower/Gulp
-By default, App Service build automation runs `npm install --production` when it recognizes a Node.js app is deployed through Git, or through Zip deployment [with build automation enabled](deploy-zip.md#enable-build-automation). If your app requires any of the popular automation tools, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script) to run it.
+By default, App Service build automation runs `npm install --production` when it recognizes a Node.js app is deployed through Git, or through Zip deployment [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy). If your app requires any of the popular automation tools, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script) to run it.
To enable your repository to run these tools, you need to add them to the dependencies in *package.json.* For example:
When a working Node.js app behaves differently in App Service or has errors, try
> [App Service Linux FAQ](faq-app-service-linux.yml) ::: zone-end+
+Or, see additional resources:
+
+[Environment variables and app settings reference](reference-app-settings.md)
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-php.md
if [ -e "$DEPLOYMENT_TARGET/composer.json" ]; then
fi ```
-Commit all your changes and deploy your code using Git, or Zip deploy [with build automation enabled](deploy-zip.md#enable-build-automation). Composer should now be running as part of deployment automation.
+Commit all your changes and deploy your code using Git, or Zip deploy [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy). Composer should now be running as part of deployment automation.
## Run Grunt/Bower/Gulp
-If you want App Service to run popular automation tools at deployment time, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script). App Service runs this script when you deploy with Git, or with [Zip deployment](deploy-zip.md) with [with build automation enabled](deploy-zip.md#enable-build-automation).
+If you want App Service to run popular automation tools at deployment time, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script). App Service runs this script when you deploy with Git, or with [Zip deployment](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy).
To enable your repository to run these tools, you need to add them to the dependencies in *package.json.* For example:
fi
## Customize build automation
-If you deploy your app using Git, or using zip packages [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service build automation steps through the following sequence:
+If you deploy your app using Git, or using zip packages [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the App Service build automation steps through the following sequence:
1. Run custom script if specified by `PRE_BUILD_SCRIPT_PATH`. 1. Run `php composer.phar install`.
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
This article describes how [Azure App Service](overview.md) runs Python apps, how you can migrate existing apps to Azure, and how you can customize the behavior of App Service when needed. Python apps must be deployed with all the required [pip](https://pypi.org/project/pip/) modules.
-The App Service deployment engine automatically activates a virtual environment and runs `pip install -r requirements.txt` for you when you deploy a [Git repository](deploy-local-git.md), or a [zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation).
+The App Service deployment engine automatically activates a virtual environment and runs `pip install -r requirements.txt` for you when you deploy a [Git repository](deploy-local-git.md), or a [zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy).
This guide provides key concepts and instructions for Python developers who use a built-in Linux container in App Service. If you've never used Azure App Service, first follow the [Python quickstart](quickstart-python.md) and [Python with PostgreSQL tutorial](tutorial-python-postgresql-app.md).
app-service Configure Language Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-ruby.md
ENV['WEBSITE_SITE_NAME']
## Customize deployment
-When you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation), the deployment engine (Kudu) automatically runs the following post-deployment steps by default:
+When you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the deployment engine (Kudu) automatically runs the following post-deployment steps by default:
1. Check if a *Gemfile* exists. 1. Run `bundle clean`.
app-service Deploy Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-best-practices.md
In your script, log in using `az login --service-principal`, providing the princ
### Java
-Use the Kudu [zipdeploy/](deploy-zip.md) API for deploying JAR applications, and [wardeploy/](deploy-zip.md#deploy-war-file) for WAR apps. If you are using Jenkins, you can use those APIs directly in your deployment phase. For more information, see [this article](/azure/developer/jenkins/deploy-to-azure-app-service-using-azure-cli).
+Use the Kudu [zipdeploy/](deploy-zip.md) API for deploying JAR applications, and [wardeploy/](deploy-zip.md#deploy-warjarear-packages) for WAR apps. If you are using Jenkins, you can use those APIs directly in your deployment phase. For more information, see [this article](/azure/developer/jenkins/deploy-to-azure-app-service-using-azure-cli).
### Node
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-zip.md
Title: Deploy code with a ZIP or WAR file
-description: Learn how to deploy your app to Azure App Service with a ZIP file (or a WAR file for Java developers).
+ Title: Deploy files to App Service
+description: Learn to deploy various app packages or discrete libraries, static files, or startup scripts to Azure App Service
Previously updated : 08/12/2019 Last updated : 08/13/2021
-# Deploy your app to Azure App Service with a ZIP or WAR file
+# Deploy files to App Service
-This article shows you how to use a ZIP file or WAR file to deploy your web app to [Azure App Service](overview.md).
+This article shows you how to deploy your code as a ZIP, WAR, JAR, or EAR package to [Azure App Service](overview.md). It also shows how to deploy individual files to App Service, separate from your application package.
-This ZIP file deployment uses the same Kudu service that powers continuous integration-based deployments. Kudu supports the following functionality for ZIP file deployment:
+## Prerequisites
+
+To complete the steps in this article, [create an App Service app](./index.yml), or use an app that you created for another tutorial.
+++
+## Deploy a ZIP package
+
+When you deploy a ZIP package, App Service unpacks its contents in the default path for your app (`D:\home\site\wwwroot` for Windows, `/home/site/wwwroot` for Linux).
+
+This ZIP package deployment uses the same Kudu service that powers continuous integration-based deployments. Kudu supports the following functionality for ZIP package deployment:
- Deletion of files left over from a previous deployment. - Option to turn on the default build process, which includes package restore. - Deployment customization, including running deployment scripts. - Deployment logs. -- A file size limit of 2048 MB.
+- A package size limit of 2048 MB.
For more information, see [Kudu documentation](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file).
-The WAR file deployment deploys your [WAR](https://wikipedia.org/wiki/WAR_(file_format)) file to App Service to run your Java web app. See [Deploy WAR file](#deploy-war-file).
- > [!NOTE]
-> When using `ZipDeploy`, files will only be copied if their timestamps don't match what is already deployed. Generating a zip using a build process that caches outputs can result in faster deployments. See [Deploying from a zip file or url](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file-or-url), for more information.
-
-## Prerequisites
+> Files in the ZIP package are copied only if their timestamps don't match what is already deployed. Generating a zip using a build process that caches outputs can result in faster deployments. For more information, see [Deploying from a zip file or url](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file-or-url).
-To complete the steps in this article, [create an App Service app](./index.yml), or use an app that you created for another tutorial.
+# [Azure CLI](#tab/cli)
+Deploy a ZIP package to your web app by using the [az webapp deploy](/cli/azure/webapp#az_webapp_deploy) command. The CLI command uses the [Kudu publish API](#kudu-publish-api-reference) to deploy the files and can be fully customized.
+The following example pushes a ZIP package to your site. Specify the path to your local ZIP package for `--src-path`.
-The above endpoint does not work for Linux App Services at this time. Consider using FTP or the [ZIP deploy API](/azure/app-service/faq-app-service-linux#continuous-integration-and-deployment) instead.
+```azurecli-interactive
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path <zip-package-path>
+```
-## Deploy ZIP file with Azure CLI
+This command restarts the app after deploying the ZIP package.
-Deploy the uploaded ZIP file to your web app by using the [az webapp deployment source config-zip](/cli/azure/webapp/deployment/source#az_webapp_deployment_source_config_zip) command.
-The following example deploys the ZIP file you uploaded. When using a local installation of Azure CLI, specify the path to your local ZIP file for `--src`.
+The following example uses the `--src-url` parameter to specify the URL of an Azure Storage account that the site should pull the ZIP from.
```azurecli-interactive
-az webapp deployment source config-zip --resource-group <group-name> --name <app-name> --src clouddrive/<filename>.zip
+az webapp deploy --resource-group <group-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"
+```
+
+# [Azure PowerShell](#tab/powershell)
+
+The following example uses [Publish-AzWebapp](/powershell/module/az.websites/publish-azwebapp) to upload the ZIP package. Replace the placeholders `<group-name>`, `<app-name>`, and `<zip-package-path>`.
+
+```powershell
+Publish-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -ArchivePath <zip-package-path>
+```
+
+# [Kudu API](#tab/api)
+
+The following example uses the cURL tool to deploy a ZIP package. Replace the placeholders `<username>`, `<zip-package-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md).
+
+```bash
+curl -X POST -u <username> --data-binary @"<zip-package-path>" https://<app-name>.scm.azurewebsites.net/api/publish&type=zip
```
-This command deploys the files and directories from the ZIP file to your default App Service application folder (`\home\site\wwwroot`) and restarts the app.
+
+The following example uses the `packageUri` parameter to specify the URL of an Azure Storage account that the web app should pull the ZIP from.
+
+```bash
+curl -X POST -u <username> https://<app-name>.scm.azurewebsites.net/api/publish -d '{"packageUri": "https://storagesample.blob.core.windows.net/sample-container/myapp.zip?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"}'
+```
+
+# [Kudu UI](#tab/kudu-ui)
+
+In the browser, navigate to `https://<app_name>.scm.azurewebsites.net/ZipDeployUI`.
+
+Upload the ZIP package you created in [Create a project ZIP package](#create-a-project-zip-package) by dragging it to the file explorer area on the web page.
+
+When deployment is in progress, an icon in the top right corner shows you the progress in percentage. The page also shows verbose messages for the operation below the explorer area. When it is finished, the last deployment message should say `Deployment successful`.
+
+The above endpoint does not work for Linux App Services at this time. Consider using FTP or the [ZIP deploy API](/azure/app-service/faq-app-service-linux#continuous-integration-and-deployment) instead.
+
+--
-## Enable build automation
+## Enable build automation for ZIP deploy
-By default, the deployment engine assumes that a ZIP file is ready to run as-is and doesn't run any build automation. To enable the same build automation as in a [Git deployment](deploy-local-git.md), set the `SCM_DO_BUILD_DURING_DEPLOYMENT` app setting by running the following command in the [Cloud Shell](https://shell.azure.com):
+By default, the deployment engine assumes that a ZIP package is ready to run as-is and doesn't run any build automation. To enable the same build automation as in a [Git deployment](deploy-local-git.md), set the `SCM_DO_BUILD_DURING_DEPLOYMENT` app setting by running the following command in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true
az webapp config appsettings set --resource-group <group-name> --name <app-name>
For more information, see [Kudu documentation](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file-or-url).
-## Deploy WAR file
+## Deploy WAR/JAR/EAR packages
-To deploy a WAR file to App Service, send a POST request to `https://<app-name>.scm.azurewebsites.net/api/wardeploy`. The POST request must contain the .war file in the message body. The deployment credentials for your app are provided in the request by using HTTP BASIC authentication.
+You can deploy your [WAR](https://wikipedia.org/wiki/WAR_(file_format)), [JAR](https://wikipedia.org/wiki/JAR_(file_format)), or [EAR](https://wikipedia.org/wiki/EAR_(file_format)) package to App Service to run your Java web app using the Azure CLI, PowerShell, or the Kudu publish API.
-Always use `/api/wardeploy` when deploying WAR files. This API will expand your WAR file and place it on the shared file drive. using other deployment APIs may result in inconsistent behavior.
+The deployment process places the package on the shared file drive correctly (see [Kudu publish API reference](#kudu-publish-api-reference)). For that reason, deploying WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy is not recommended.
-For the HTTP BASIC authentication, you need your App Service deployment credentials. To see how to set your deployment credentials, see [Set and reset user-level credentials](deploy-configure-credentials.md#userscope).
+# [Azure CLI](#tab/cli)
-### With cURL
+Deploy a WAR package to Tomcat or JBoss EAP by using the [az webapp deploy](/cli/azure/webapp#az_webapp_deploy) command. Specify the path to your local Java package for `--src-path`.
-The following example uses the cURL tool to deploy a .war file. Replace the placeholders `<username>`, `<war-file-path>`, and `<app-name>`. When prompted by cURL, type in the password.
+```azurecli-interactive
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path ./<package-name>.war
+```
-```bash
-curl -X POST -u <username> --data-binary @"<war-file-path>" https://<app-name>.scm.azurewebsites.net/api/wardeploy
+
+The following example uses the `--src-url` parameter to specify the URL of an Azure Storage account that the web app should pull the WAR from.
+
+```azurecli-interactive
+az webapp deploy --resource-group <group-name> --name <app-name> --src-url "https://storagesample.blob.core.windows.net/sample-container/myapp.war?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"
```
-### With PowerShell
+The CLI command uses the [Kudu publish API](#kudu-publish-api-reference) to deploy the package and can be fully customized.
+
+# [Azure PowerShell](#tab/powershell)
-The following example uses [Publish-AzWebapp](/powershell/module/az.websites/publish-azwebapp) upload the .war file. Replace the placeholders `<group-name>`, `<app-name>`, and `<war-file-path>`.
+The following example uses [Publish-AzWebapp](/powershell/module/az.websites/publish-azwebapp) to upload the .war file. Replace the placeholders `<group-name>`, `<app-name>`, and `<package-path>` (only WAR and JAR files are supported in Azure PowerShell).
```powershell
-Publish-AzWebapp -ResourceGroupName <group-name> -Name <app-name> -ArchivePath <war-file-path>
+Publish-AzWebapp -ResourceGroupName <group-name> -Name <app-name> -ArchivePath <package-path>
```
+# [Kudu API](#tab/api)
+
+The following example uses the cURL tool to deploy a .war, .jar, or .ear file. Replace the placeholders `<username>`, `<file-path>`, `<app-name>`, and `<package-type>` (`war`, `jar`, or `ear`, accordingly). When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md).
+
+```bash
+curl -X POST -u <username> --data-binary @"<file-path>" https://<app-name>.scm.azurewebsites.net/api/publish&type=<package-type>
+```
++
+The following example uses the `packageUri` parameter to specify the URL of an Azure Storage account that the web app should pull the WAR from. The WAR file could also be a JAR or EAR file.
+
+```bash
+curl -X POST -u <username> https://<app-name>.scm.azurewebsites.net/api/publish -d '{"packageUri": "https://storagesample.blob.core.windows.net/sample-container/myapp.war?sv=2021-10-01&sb&sig=slk22f3UrS823n4kSh8Skjpa7Naj4CG3"}'
+```
+
+For more information, see the [Kudu publish API reference](#kudu-publish-api-reference).
+
+# [Kudu UI](#tab/kudu-ui)
+
+The Kudu UI does not support deploying JAR, WAR, or EAR applications. Please use one of the other options.
+
+--
+
+## Deploy individual files
+
+# [Azure CLI](#tab/cli)
+
+Deploy a startup script, library, and static file to your web app by using the [az webapp deploy](/cli/azure/webapp#az_webapp_deploy) command with the `--type` parameter.
+
+If you deploy a startup script this way, App Service automatically uses your script to start your app.
+
+The CLI command uses the [Kudu publish API](#kudu-publish-api-reference) to deploy the files and can be fully customized.
+
+### Deploy a startup script
+
+```bash
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path scripts/startup.sh --type=startup
+```
+
+### Deploy a library file
+
+```bash
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path driver.jar --type=lib
+```
+
+### Deploy a static file
+
+```bash
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path config.json --type=static
+```
+
+# [Azure PowerShell](#tab/powershell)
+
+Not supported. See Azure CLI or Kudu API.
+
+# [Kudu API](#tab/api)
+
+### Deploy a startup script
+
+The following example uses the cURL tool to deploy a startup file for an application. Replace the placeholders `<username>`, `<startup-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md).
+
+```bash
+curl -X POST -u <username> --data-binary @"<startup-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish&type=startup
+```
+
+### Deploy a library file
+
+The following example uses the cURL tool to deploy a library file for an application. Replace the placeholders `<username>`, `<lib-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md).
+
+```bash
+curl -X POST -u <username> --data-binary @"<lib-file-path>" https://<app-name>.scm.azurewebsites.net/api/publish&type=lib&path="/home/site/deployments/tools/my-lib.jar"
+```
+
+### Deploy a static file
+
+The following example uses the cURL tool to deploy a config file for an application. Replace the placeholders `<username>`, `<config-file-path>`, and `<app-name>`. When prompted by cURL, type in the [deployment password](deploy-configure-credentials.md).
+
+```bash
+curl -X POST -u <username> --data-binary @"<config-file-path>" "https://<app-name>.scm.azurewebsites.net/api/publish?type=static&path=/home/site/deployments/tools/my-config.json"
+```
+
+# [Kudu UI](#tab/kudu-ui)
+
+The Kudu UI doesn't support deploying individual files. Use the Azure CLI or the Kudu REST API instead.
+
+---
+
+## Kudu publish API reference
+
+The `publish` Kudu API allows you to specify the same parameters from the CLI command as URL query parameters. To authenticate with the Kudu API, you can use basic authentication with your app's [deployment credentials](deploy-configure-credentials.md#userscope).
+
+The table below shows the available query parameters, their allowed values, and descriptions.
+
+| Key | Allowed values | Description | Required | Type |
+|-|-|-|-|-|
+| `type` | `war`\|`jar`\|`ear`\|`lib`\|`startup`\|`static`\|`zip` | The type of the artifact being deployed. This value sets the default target path and informs the web app how the deployment should be handled. <br/> - `type=zip`: Deploy a ZIP package by unzipping the content to `/home/site/wwwroot`. The `path` parameter is optional. <br/> - `type=war`: Deploy a WAR package. By default, the WAR package is deployed to `/home/site/wwwroot/app.war`. The target path can be specified with `path`. <br/> - `type=jar`: Deploy a JAR package to `/home/site/wwwroot/app.jar`. The `path` parameter is ignored. <br/> - `type=ear`: Deploy an EAR package to `/home/site/wwwroot/app.ear`. The `path` parameter is ignored. <br/> - `type=lib`: Deploy a JAR library file. By default, the file is deployed to `/home/site/libs`. The target path can be specified with `path`. <br/> - `type=static`: Deploy a static file (for example, a script). By default, the file is deployed to `/home/site/scripts`. The target path can be specified with `path`. <br/> - `type=startup`: Deploy a script that App Service automatically uses as the startup script for your app. By default, the script is deployed to `D:\home\site\scripts\<name-of-source>` on Windows and to `/home/site/wwwroot/startup.sh` on Linux. The target path can be specified with `path`. | Yes | String |
+| `restart` | `true`\|`false` | By default, the API restarts the app following the deployment operation (`restart=true`). To deploy multiple artifacts, prevent restarts on all but the final deployment by setting `restart=false`. | No | Boolean |
+| `clean` | `true`\|`false` | Specifies whether to clean (delete) the target deployment before deploying the artifact there. | No | Boolean |
+| `ignorestack` | `true`\|`false` | The publish API uses the `WEBSITE_STACK` environment variable to choose safe defaults depending on your site's language stack. Setting this parameter to `false` disables any language-specific defaults. | No | Boolean |
+| `path` | `"<absolute-path>"` | The absolute path to deploy the artifact to. For example, `"/home/site/deployments/tools/driver.jar"`, `"/home/site/scripts/helper.sh"`. | No | String |
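+
+For illustration, the following hedged sketch combines several of these query parameters in a single request. The file name `driver.jar` and the placeholders are hypothetical; the parameter behavior is taken from the table above.
+
+```bash
+# Deploy a library to a custom path without restarting the app.
+# restart=false is useful when uploading several artifacts in a row;
+# omit it (or set restart=true) on the final deployment.
+curl -X POST -u <username> --data-binary @"driver.jar" "https://<app-name>.scm.azurewebsites.net/api/publish?type=lib&path=/home/site/libs/driver.jar&restart=false"
+```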
## Next steps
-For more advanced deployment scenarios, try [deploying to Azure with Git](deploy-local-git.md). Git-based deployment to Azure
-enables version control, package restore, MSBuild, and more.
+For more advanced deployment scenarios, try [deploying to Azure with Git](deploy-local-git.md). Git-based deployment to Azure enables version control, package restore, MSBuild, and more.
## More resources * [Kudu: Deploying from a zip file](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file) * [Azure App Service Deployment Credentials](deploy-ftp.md)
-* [Environment variables and app settings reference](reference-app-settings.md)
+* [Environment variables and app settings reference](reference-app-settings.md)
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-auth-aad.md
az webapp create --resource-group myAuthResourceGroup --plan myAuthAppServicePla
### Push to Azure from Git
-1. Since you're deploying the `main` branch, you need to set the default deployment branch for your two App Service apps to `main` (see [Change deployment branch](deploy-local-git.md#change-deployment-branch)). In the Cloud Shell, set the `DEPLOYMENT_BRANCH` app setting with the [`az webapp config appsettings set`](/cli/azure/webapp/appsettings#az_webapp_config_appsettings_set) command.
+1. Since you're deploying the `main` branch, you need to set the default deployment branch for your two App Service apps to `main` (see [Change deployment branch](deploy-local-git.md#change-deployment-branch)). In the Cloud Shell, set the `DEPLOYMENT_BRANCH` app setting with the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command.
```azurecli-interactive az webapp config appsettings set --name <front-end-app-name> --resource-group myAuthResourceGroup --settings DEPLOYMENT_BRANCH='main'
app-service Tutorial Ruby Postgres App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-ruby-postgres-app.md
az webapp config appsettings set --name <app-name> --resource-group myResourceGr
### Push to Azure from Git
-1. Since you're deploying the `main` branch, you need to set the default deployment branch for your App Service app to `main` (see [Change deployment branch](deploy-local-git.md#change-deployment-branch)). In the Cloud Shell, set the `DEPLOYMENT_BRANCH` app setting with the [`az webapp config appsettings set`](/cli/azure/webapp/appsettings#az_webapp_config_appsettings_set) command.
+1. Since you're deploying the `main` branch, you need to set the default deployment branch for your App Service app to `main` (see [Change deployment branch](deploy-local-git.md#change-deployment-branch)). In the Cloud Shell, set the `DEPLOYMENT_BRANCH` app setting with the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command.
```azurecli-interactive az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DEPLOYMENT_BRANCH='main'
application-gateway Create Url Route Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/create-url-route-portal.md
On the **Configuration** tab, you'll connect the frontend and backend pool you c
> [!NOTE] > You do not need to add a custom */** path rule to handle default cases. This is automatically handled by the default backend pool.
+> [!NOTE]
+> The wildcard delimiter **\*** is honored only at the end of the rule (for example, `/images/*` is valid, but `/images/*/details` is not). For more information and examples of supported path-based rules, see [URL Path Based Routing overview](url-route-overview.md#pathpattern).
+ ### Review + create tab Review the settings on the **Review + create** tab, and then select **Create** to create the virtual network, the public IP address, and the application gateway. It may take several minutes for Azure to create the application gateway. Wait until the deployment finishes successfully before moving on to the next section.
When no longer needed, delete the resource group and all related resources. To d
## Next steps > [!div class="nextstepaction"]
-> [Enable end to end TLS on Azure Application Gateway](./ssl-overview.md)
+> [Enable end to end TLS on Azure Application Gateway](./ssl-overview.md)
application-gateway Custom Error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/custom-error.md
To create a custom error page, you must have:
- a publicly accessible Azure storage blob for the location. - an *.htm or *.html extension type.
-The size of the error page must be less than 1 MB. If there are images linked in the error page, they must be either publicly accessible absolute URLs or base64 encoded image inline in the custom error page. Relative links with images in the same blob location are currently not supported.
+The size of the error page must be less than 1 MB. You can reference internal or external images and CSS from this HTML file. For externally referenced resources, use absolute URLs that are publicly accessible. Be aware of the HTML file size when using internal images (Base64-encoded inline images) or CSS. Relative links with files in the same blob location are currently not supported.
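+
+For illustration, a minimal sketch of such an error page follows. The text and the truncated Base64 data are hypothetical placeholders; the constraints above (an *.htm or *.html file under 1 MB, absolute URLs for external resources) are the only requirements.
+
+```html
+<!DOCTYPE html>
+<html>
+<head><title>Service temporarily unavailable</title></head>
+<body>
+  <!-- An inline Base64 image keeps the page self-contained (placeholder data shown). -->
+  <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..." alt="Company logo" />
+  <p>The service is temporarily unavailable. Try again in a few minutes.</p>
+</body>
+</html>
+```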
-After you specify an error page, the application gateway downloads it from the storage blob location and saves it to the local application gateway cache. Then the error page is served directly from the application gateway. To modify an existing custom error page, you must point to a different blob location in the application gateway configuration. The application gateway doesn't periodically check the blob location to fetch new versions.
+After you specify an error page, the application gateway downloads it from the storage blob location and saves it to the local application gateway cache. Then, that HTML page is served by the application gateway, whereas the externally referenced resources are fetched directly by the client. To modify an existing custom error page, you must point to a different blob location in the application gateway configuration. The application gateway doesn't periodically check the blob location to fetch new versions.
## Portal configuration
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server.md
Automanage supports the following Windows Server versions:
|[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |No | |[Guest configuration](../governance/policy/concepts/guest-configuration.md) | Guest configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. For Windows machines, the guest configuration service will automatically reapply the baseline settings if they are out of compliance. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No | |[Boot Diagnostics](../virtual-machines/boot-diagnostics.md) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test |No |
+|[Windows Admin Center](/windows-server/manage/windows-admin-center/azure/manage-vm) | Use Windows Admin Center (preview) in the Azure portal to manage the Windows Server operating system inside an Azure VM. This is only supported for machines using Windows Server 2016 or later. Automanage configures Windows Admin Center over a private IP address. If you want to connect with Windows Admin Center over a public IP address, open an inbound port rule for port 6516. Automanage onboards Windows Admin Center for the Dev/Test profile by default. Use the preferences to enable or disable Windows Admin Center for the Production and Dev/Test environments. |Production, Dev/Test |Yes |
|[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |No | |[Log Analytics Workspace](../azure-monitor/logs/log-analytics-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |No |
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/add-user-assigned-identity.md
Title: Using a user-assigned managed identity for an Azure Automation account (p
description: This article describes how to set up a user-assigned managed identity for Azure Automation accounts. Previously updated : 07/09/2021 Last updated : 08/26/2021
An Automation account can use its user-assigned managed identity to obtain token
Before you can use your user-assigned managed identity for authentication, set up access for that identity on the Azure resource where you plan to use the identity. To complete this task, assign the appropriate role to that identity on the target Azure resource.
-This example uses Azure PowerShell to show how to assign the Contributor role in the subscription to the target Azure resource. The Contributor role is used as an example and may or may not be required in your case. Alternatively, you can use portal also to assign the role to the target Azure resource.
+Follow the principle of least privilege and assign only the permissions required to execute your runbook. For example, if the Automation account is only required to start or stop an Azure VM, then the permissions assigned to the Run As account or managed identity need to allow only starting or stopping the VM. Similarly, if a runbook is reading from blob storage, assign read-only permissions.
+
+This example uses Azure PowerShell to show how to assign the Contributor role in the subscription to the target Azure resource. The Contributor role is used as an example and may or may not be required in your case. Alternatively, you can also assign the role to the target Azure resource in the [Azure portal](../role-based-access-control/role-assignments-portal.md).
```powershell New-AzRoleAssignment `
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-role-based-access-control.md
description: This article describes how to use Azure role-based access control (
keywords: automation rbac, role based access control, azure rbac Previously updated : 06/15/2021 Last updated : 08/26/2021
In Azure Automation, access is granted by assigning the appropriate Azure role t
| Owner |The Owner role allows access to all resources and actions within an Automation account including providing access to other users, groups, and applications to manage the Automation account. | | Contributor |The Contributor role allows you to manage everything except modifying other users' access permissions to an Automation account. | | Reader |The Reader role allows you to view all the resources in an Automation account but can't make any changes. |
+| Automation Contributor | The Automation Contributor role allows you to manage all resources in the Automation account, except modifying other users' access permissions to an Automation account. |
| Automation Operator |The Automation Operator role allows you to view runbook name and properties and to create and manage jobs for all runbooks in an Automation account. This role is helpful if you want to protect your Automation account resources like credentials assets and runbooks from being viewed or modified but still allow members of your organization to execute these runbooks. | |Automation Job Operator|The Automation Job Operator role allows you to create and manage jobs for all runbooks in an Automation account.| |Automation Runbook Operator|The Automation Runbook Operator role allows you to view a runbook's name and properties.|
A Reader can view all the resources in an Automation account but can't make any
||| |Microsoft.Automation/automationAccounts/read|View all resources in an Automation account. |
+### Automation Contributor
+
+An Automation Contributor can manage all resources in the Automation account, but can't modify other users' access permissions. The following table shows the permissions granted for the role:
+
+|**Actions** |**Description** |
+|||
+|Microsoft.Automation/automationAccounts/*|Create and manage resources of all types under an Automation account.|
+|Microsoft.Authorization/*/read|Read roles and role assignments.|
+|Microsoft.Resources/deployments/*|Create and manage resource group deployments.|
+|Microsoft.Resources/subscriptions/resourceGroups/read|Read resource groups.|
+|Microsoft.Support/*|Create and manage support tickets.|
+
+> [!NOTE]
+> The Automation Contributor role can be used to access any resource using the managed identity, if appropriate permissions are set on the target resource, or using a Run As account. An Automation Run As account is, by default, configured with Contributor rights on the subscription. Follow the principle of least privilege and assign only the permissions required to execute your runbook. For example, if the Automation account is only required to start or stop an Azure VM, then the permissions assigned to the Run As account or managed identity need to allow only starting or stopping the VM. Similarly, if a runbook is reading from blob storage, assign read-only permissions, as sketched after this note.
+>
+> When assigning permissions, we recommend using Azure role-based access control (RBAC) assigned to a managed identity. Review our [best approach](../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md) recommendations for using a system-assigned or user-assigned managed identity, including management and governance during its lifetime.
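+
+For example, a hedged CLI sketch for the blob-reading scenario above, granting a managed identity read-only access to a single storage account (all names and IDs are placeholders; `Storage Blob Data Reader` is a built-in read-only role):
+
+```azurecli
+az role assignment create \
+    --assignee <managed-identity-principal-id> \
+    --role "Storage Blob Data Reader" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+```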
+ ### Automation Operator An Automation Operator is able to create and manage jobs, and read runbook names and properties for all runbooks in an Automation account.
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/enable-managed-identity-for-automation.md
Title: Using a system-assigned managed identity for an Azure Automation account
description: This article describes how to set up managed identity for Azure Automation accounts. Previously updated : 07/24/2021 Last updated : 08/12/2021
An Automation account can use its system-assigned managed identity to get tokens
Before you can use your system-assigned managed identity for authentication, set up access for that identity on the Azure resource where you plan to use the identity. To complete this task, assign the appropriate role to that identity on the target Azure resource.
+Follow the principle of least privilege and assign only the permissions required to execute your runbook. For example, if the Automation account is only required to start or stop an Azure VM, then the permissions assigned to the Run As account or managed identity need to allow only starting or stopping the VM. Similarly, if a runbook is reading from blob storage, assign read-only permissions.
+ This example uses Azure PowerShell to show how to assign the Contributor role in the subscription to the target Azure resource. The Contributor role is used as an example, and may or may not be required in your case. ```powershell
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-overview.md
Title: Security overview description: Security information about Azure Arc-enabled servers. Previously updated : 07/16/2021 Last updated : 08/30/2021 # Azure Arc for servers security overview
This article describes the security configuration and considerations you should
## Identity and access control
-Each Azure Arc-enabled server has a managed identity as part of a resource group inside an Azure subscription, this identity represents the server running on-premises or other cloud environment. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md) page in the Azure portal, you can verify who has access to your Azure Arc-enabled server.
+Each Azure Arc-enabled server has a managed identity as part of a resource group inside an Azure subscription. That identity represents the server running on-premises or in another cloud environment. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md) page in the Azure portal, you can verify who has access to your Azure Arc-enabled server.
:::image type="content" source="./media/security-overview/access-control-page.png" alt-text="Azure Arc-enabled server access control" border="false" lightbox="./media/security-overview/access-control-page.png":::
Users as a member of the **Azure Connected Machine Resource Administrator** role
## Agent security and permissions
-To manage the Azure Connected Machine agent (azcmagent) on Windows your user account needs to be a member of the local Administrators group. On Linux, you must have root access permissions.
+To manage the Azure Connected Machine agent (azcmagent) on Windows, your user account needs to be a member of the local Administrators group. On Linux, you must have root access permissions.
The Azure Connected Machine agent is composed of three services, which run on your machine.
The guest configuration and extension services run as Local System on Windows, a
## Using a managed identity with Arc-enabled servers
-By default, the Azure Active Directory system assigned identity used by Arc can only be used to update the status of the Arc-enabled server in Azure. For example, the *last seen* heartbeat status. You can optionally assign additional roles to the identity if an application on your server uses the system assigned identity to access other Azure services.
+By default, the Azure Active Directory system assigned identity used by Arc can only be used to update the status of the Arc-enabled server in Azure. For example, the *last seen* heartbeat status. You can optionally assign other roles to the identity if an application on your server uses the system assigned identity to access other Azure services. To learn more about configuring a system-assigned managed identity to access Azure resources, see [Authenticate against Azure resources with Arc-enabled servers](managed-identity-authentication.md).
While the Hybrid Instance Metadata Service can be accessed by any application running on the machine, only authorized applications can request an Azure AD token for the system assigned identity. On the first attempt to access the token URI, the service generates a random cryptographic blob in a location on the file system that only trusted callers can read. The caller must then read the file (proving it has appropriate permission) and retry the request with the file contents in the authorization header to successfully retrieve an Azure AD token (a sketch of this flow follows the list below).
While the Hybrid Instance Metadata Service can be accessed by any application ru
* On Linux, the caller must be a member of the **himds** group to read the blob.
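+
+To make that exchange concrete, here's a hedged bash sketch of the challenge/response flow on a Linux machine. The local endpoint, port, and API version are assumptions based on the agent's defaults; run it under an account in the **himds** group.
+
+```bash
+# 1. The first request returns 401, and the Www-Authenticate header carries
+#    the path to the challenge file on the local file system.
+challenge_path=$(curl -s -D - -o /dev/null -H "Metadata: true" \
+  "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2020-06-01&resource=https%3A%2F%2Fmanagement.azure.com" \
+  | grep -i "Www-Authenticate" | cut -d "=" -f 2 | tr -d "[:cntrl:]")
+
+# 2. Reading the file proves the caller has the required permissions; retry
+#    with its contents in the Authorization header to receive the token.
+curl -s -H "Metadata: true" -H "Authorization: Basic $(cat "$challenge_path")" \
+  "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2020-06-01&resource=https%3A%2F%2Fmanagement.azure.com"
+```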
+To learn more about using a managed identity with Arc-enabled servers to authenticate and access Azure resources, see the following video.
+
+> [!VIDEO https://www.youtube.com/embed/4hfwxwhWcP4]
+ ## Using disk encryption The Azure Connected Machine agent uses public key authentication to communicate with the Azure service. After you onboard a server to Azure Arc, a private key is saved to the disk and used whenever the agent communicates with Azure. If stolen, the private key can be used on another server to communicate with the service and act as if it were the original server. This includes getting access to the system assigned identity and any resources that identity has access to. The private key file is protected to only allow the **himds** account access to read it. To prevent offline attacks, we strongly recommend the use of full disk encryption (for example, BitLocker, dm-crypt, etc.) on the operating system volume of your server.
azure-cache-for-redis Cache Best Practices Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-connection.md
Configure your client connections to retry commands with exponential backoff. Fo
## Test resiliency
-Test your system's resiliency to connection breaks using a [Reboot](cache-administration.md#reboot) to simulate a patch. For more information on testing your performance, see [Performance testing](cache-best-practices-performance.md).
+Test your system's resiliency to connection breaks using a [reboot](cache-administration.md#reboot) to simulate a patch. For more information on testing your performance, see [Performance testing](cache-best-practices-performance.md).
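+
+For example, a hedged CLI sketch that simulates a patch by rebooting the primary node (the cache and resource group names are placeholders):
+
+```azurecli
+# Reboot only the primary node; use AllNodes to simulate a full outage.
+az redis force-reboot --name <cache-name> --resource-group <resource-group> --reboot-type PrimaryNode
+```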
## Configure appropriate timeouts
Avoid creating many connections at the same time when reconnecting after a conne
If you're reconnecting many client instances, consider staggering the new connections to avoid a steep spike in the number of connected clients. > [!NOTE]
-> When you use the `StackExchange.Redis` client library, set `abortConnect` to `false` in your connection string. We recommend letting the `ConnectionMultiplexer` handle reconnection. For more information, see [StackExchange.Redis best practices](/azure/azure-cache-for-redis/cache-planning-faq#stackexchangeredis-best-practices).
+> When you use the `StackExchange.Redis` client library, set `abortConnect` to `false` in your connection string. We recommend letting the `ConnectionMultiplexer` handle reconnection. For more information, see [StackExchange.Redis best practices](/azure/azure-cache-for-redis/cache-management-faq#stackexchangeredis-best-practices).
## Avoid leftover connections
Caches have limits on the number of client connections per cache tier. Ensure th
## Advance maintenance notification
-Use notifications to learn of upcoming maintenance. For more information, see [Can I be notified in advance of a planned maintenance?](cache-failover.md#can-i-be-notified-in-advance-of-a-planned-maintenance).
+Use notifications to learn of upcoming maintenance. For more information, see [Can I be notified in advance of a planned maintenance](cache-failover.md#can-i-be-notified-in-advance-of-a-planned-maintenance).
## Schedule maintenance window
Adjust your cache settings to accommodate maintenance. For more information abou
## More design patterns for resilience
-Apply design patterns for resiliency. For more information, see [How do I make my application resilient?](cache-failover.md#how-do-i-make-my-application-resilient).
+Apply design patterns for resiliency. For more information, see [How do I make my application resilient](cache-failover.md#how-do-i-make-my-application-resilient).
## Idle timeout
azure-cache-for-redis Cache Best Practices Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-performance.md
The `redis-benchmark.exe` doesn't support TLS. You'll have to [enable the Non-TL
**Pre-test setup**: Prepare the cache instance with data required for the latency and throughput testing:
-```azurecli
+```dos
redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t SET -n 10 -d 1024 ``` **To test latency**: Test GET requests using a 1k payload:
-```azurecli
+```dos
redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -d 1024 -P 50 -c 4 ``` **To test throughput:** Pipelined GET requests with 1k payload:
-```azurecli
+```dos
redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50 ```
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.
### Log custom telemetry
-By default, Functions writes output as traces to Application Insights. For more control, you can instead use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure) to send custom telemetry data to your Application Insights instance.
+By default, Functions collects some telemetry for your app, which ends up as traces in Application Insights. For more control, you can instead use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure) to send custom telemetry data to your Application Insights instance.
+For the list of supported libraries, see [OpenCensus Python integrations](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
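+
+For illustration, a minimal sketch that uses the `opencensus-ext-azure` log exporter to send a custom trace record to Application Insights. The connection string is a placeholder; in practice, read it from your application settings.
+
+```python
+import logging
+
+from opencensus.ext.azure.log_exporter import AzureLogHandler
+
+logger = logging.getLogger(__name__)
+# Placeholder connection string; use your Application Insights resource's value.
+logger.addHandler(AzureLogHandler(
+    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"))
+
+# The custom_dimensions dict surfaces as customDimensions on the trace record.
+logger.warning("Queue length above threshold",
+               extra={"custom_dimensions": {"queue_length": 42}})
+```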
>[!NOTE] > To use the OpenCensus Python Extensions, you need to enable [Python Extensions](#python-worker-extensions) by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1` in `local.settings.json` and application settings
azure-monitor Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-activity-log.md
The following procedure describes how to create a metric alert rule in Azure por
3. Click **Select target**, in the context pane that loads, select a target resource that you want to alert on. Use **Subscription** and **Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource. > [!NOTE]
- > As a target, you can select an entire subscription, a resource group, or a specific resource. If you chose a subscription or a resource group as a target, and also selected a resource type, the rule will apply to all resources of that type within the selected subscription or a reosurce group. If you chose a specific target resource, the rule will apply only to that resource. You can't explicitly select multiple subscriptions, resource groups, or resources using the target selector.
+ > As a target, you can select an entire subscription, a resource group, or a specific resource. If you chose a subscription or a resource group as a target, and also selected a resource type, the rule will apply to all resources of that type within the selected subscription or a resource group. If you chose a specific target resource, the rule will apply only to that resource. You can't explicitly select multiple subscriptions, resource groups, or resources using the target selector.
4. If the selected resource has activity log operations you can create alerts on, **Available signals** on the bottom right will include Activity Log. You can view the full list of resource types supported for activity log alerts in this [article](../../role-based-access-control/resource-provider-operations.md).
azure-monitor Solution Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/solution-office-365.md
Last updated 03/30/2020
> ### Q: How I can use the Azure Sentinel out-of-the-box security-oriented content? > Azure Sentinel provides out-of-the-box security-oriented dashboards, custom alert queries, hunting queries, investigation, and automated response capabilities based on the Office 365 and Azure AD logs. Explore the Azure Sentinel GitHub and tutorials to learn more: >
-> - [Detect threats out-of-the-box](/azure/azure-monitor/insights/articles/sentinel/detect-threats-built-in.md)
-> - [Create custom analytic rules to detect suspicious threats](/azure/azure-monitor/insights/articles/sentinel/detect-threats-custom.md)
-> - [Monitor your data](/azure/azure-monitor/insights/articles/sentinel/monitor-your-data.md)
-> - [Investigate incidents with Azure Sentinel](/azure/azure-monitor/insights/articles/sentinel/investigate-cases.md)
+> - [Detect threats out-of-the-box](/azure/sentinel/detect-threats-built-in)
+> - [Create custom analytic rules to detect suspicious threats](/azure/sentinel/detect-threats-custom)
+> - [Monitor your data](/azure/sentinel/monitor-your-data)
+> - [Investigate incidents with Azure Sentinel](/azure/sentinel/investigate-cases)
> - [Set up automated threat responses in Azure Sentinel](../../sentinel/tutorial-respond-threats-playbook.md) > - [Azure Sentinel GitHub community](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks) >
The following table provides sample log queries for update records collected by
* Use [log queries in Azure Monitor](../logs/log-query-overview.md) to view detailed update data. * [Create your own dashboards](../visualize/tutorial-logs-dashboards.md) to display your favorite Office 365 search queries.
-* [Create alerts](../alerts/alerts-overview.md) to be proactively notified of important Office 365 activities.
+* [Create alerts](../alerts/alerts-overview.md) to be proactively notified of important Office 365 activities.
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-configure.md
Configuring a Private Link requires a few steps:
This article reviews how it's done through the Azure portal and provides an example Azure Resource Manager (ARM) template to automate the process.
-## Create a Private Link connection
+## Create a Private Link connection through the Azure portal
+In this section, we review the process of setting up a Private Link through the Azure portal, step by step. See [Use APIs and command line](#use-apis-and-command-line) to create and manage a Private Link using the command line or an Azure Resource Manager template (ARM template).
-Start by creating an Azure Monitor Private Link Scope resource.
+### Create an Azure Monitor Private Link Scope
1. Go to **Create a resource** in the Azure portal and search for **Azure Monitor Private Link Scope**.
If you set **Allow public network access for ingestion** to **No**, then clients
If you set **Allow public network access for queries** to **No**, then clients (machines, SDKs etc.) outside of the connected scopes can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal and that query Log Analytics data also have to be running within the private-linked VNET.
-### Exceptions
-
-#### Diagnostic logs
-Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel, and are not controlled by these settings.
-
-#### Azure Resource Manager
-Restricting access as explained above applies to data in the resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. To control these settings, you should restrict access to resources using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md)
-
-Additionally, specific experiences (such as the LogicApp connector, Update Management solution, and the Workspace Summary blade in the portal, showing the solutions dashboard) query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well.
--
-## Review and validate your Private Link setup
-
-### Reviewing your Endpoint's DNS settings
-The Private Endpoint you created should now have an five DNS zones configured:
-
-* privatelink-monitor-azure-com
-* privatelink-oms-opinsights-azure-com
-* privatelink-ods-opinsights-azure-com
-* privatelink-agentsvc-azure-automation-net
-* privatelink-blob-core-windows-net
-
-> [!NOTE]
-> Each of these zones maps specific Azure Monitor endpoints to private IPs from the VNet's pool of IPs. The IP addresses showns in the below images are only examples. Your configuration should instead show private IPs from your own network.
-
-#### Privatelink-monitor-azure-com
-This zone covers the global endpoints used by Azure Monitor, meaning these endpoints serve requests considering all resources, not a specific one. This zone should have endpoints mapped for:
-* `in.ai` - Application Insights ingestion endpoint (both a global and a regional entry)
-* `api` - Application Insights and Log Analytics API endpoint
-* `live` - Application Insights live metrics endpoint
-* `profiler` - Application Insights profiler endpoint
-* `snapshot` - Application Insights snapshots endpoint
-[![Screenshot of Private DNS zone monitor-azure-com.](./media/private-link-security/dns-zone-privatelink-monitor-azure-com.png)](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded.png#lightbox)
-
-#### privatelink-oms-opinsights-azure-com
-This zone covers workspace-specific mapping to OMS endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
-[![Screenshot of Private DNS zone oms-opinsights-azure-com.](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com-expanded.png#lightbox)
-
-#### privatelink-ods-opinsights-azure-com
-This zone covers workspace-specific mapping to ODS endpoints - the ingestion endpoint of Log Analytics. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
-[![Screenshot of Private DNS zone ods-opinsights-azure-com.](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com-expanded.png#lightbox)
-
-#### privatelink-agentsvc-azure-automation-net
-This zone covers workspace-specific mapping to the agent service automation endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
-[![Screenshot of Private DNS zone agent svc-azure-automation-net.](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net.png)](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net-expanded.png#lightbox)
-
-#### privatelink-blob-core-windows-net
-This zone configures connectivity to the global agents' solution packs storage account. Through it, agents can download new or updated solution packs (also known as management packs). Only one entry is required to handle to Log Analytics agents, no matter how many workspaces are used.
-[![Screenshot of Private DNS zone blob-core-windows-net.](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net.png)](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net-expanded.png#lightbox)
-> [!NOTE]
-> This entry is only added to Private Links setups created at or after April 19, 2021 (or starting June, 2021 on Azure Sovereign clouds).
--
-### Validating you are communicating over a Private Link
-* To validate your requests are now sent through the Private Endpoint, you can review them with a network tracking tool or even your browser. For example, when attempting to query your workspace or application, make sure the request is sent to the private IP mapped to the API endpoint, in this example it's *172.17.0.9*.
+## Use APIs and command line
- Note: Some browsers may use other DNS settings (see [Browser DNS settings](./private-link-design.md#browser-dns-settings)). Make sure your DNS settings apply.
+You can automate the process described earlier using Azure Resource Manager templates, REST, and command-line interfaces.
-* To make sure your workspace or component aren't receiving requests from public networks (not connected through AMPLS), set the resource's public ingestion and query flags to *No* as explained in [Configure access to your resources](#configure-access-to-your-resources).
+### Create and manage Azure Monitor Private Link Scopes (AMPLS)
+To create and manage private link scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
-* From a client on your protected network, use `nslookup` to any of the endpoints listed in your DNS zones. It should be resolved by your DNS server to the mapped private IPs instead of the public IPs used by default.
+#### Create AMPLS with Open access modes - CLI example
+The following CLI command creates a new AMPLS resource named "my-scope", with both query and ingestion access modes set to Open.
+```azurecli
+az resource create -g "my-resource-group" --name "my-scope" --api-version "2021-07-01-preview" --resource-type Microsoft.Insights/privateLinkScopes --properties "{\"accessModeSettings\":{\"queryAccessMode\":\"Open\", \"ingestionAccessMode\":\"Open\"}}"
+```
+#### Create AMPLS with mixed access modes - PowerShell example
+The following PowerShell script creates a new AMPLS resource named "my-scope", with the query access mode set to Open and the ingestion access mode set to PrivateOnly (meaning it allows ingestion only to resources in the AMPLS).
-## Use APIs and command line
+```powershell
+# scope details
+$scopeSubscriptionId = "ab1800bd-ceac-48cd-...-..."
+$scopeResourceGroup = "my-resource-group"
+$scopeName = "my-scope"
+$scopeProperties = @{
+ accessModeSettings = @{
+ queryAccessMode = "Open";
+ ingestionAccessMode = "PrivateOnly"
+ }
+}
-You can automate the process described earlier using Azure Resource Manager templates, REST, and command-line interfaces.
+# login
+Connect-AzAccount
-To create and manage private link scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
+# select subscription
+Select-AzSubscription -SubscriptionId $scopeSubscriptionId
-To manage the network access flag on your workspace or component, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]`on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
+# create private link scope resource
+$scope = New-AzResource -Location "Global" -Properties $scopeProperties -ResourceName $scopeName -ResourceType "Microsoft.Insights/privateLinkScopes" -ResourceGroupName $scopeResourceGroup -ApiVersion "2021-07-01-preview" -Force
+```
-### Example Azure Resource Manager template (ARM template)
+#### Create AMPLS - Azure Resource Manager template (ARM template)
The below Azure Resource Manager template creates: * A private link scope (AMPLS) named "my-scope" * A Log Analytics workspace named "my-workspace" * Add a scoped resource to the "my-scope" AMPLS, named "my-workspace-connection"
+> [!NOTE]
+> The following ARM template uses API version "2019-04-01", which doesn't support setting the AMPLS access modes. When you use this template, the resulting AMPLS is set with QueryAccessMode="Open" and IngestionAccessMode="PrivateOnly", meaning it allows queries to run on resources both in and out of the AMPLS, but limits ingestion to reach only Private Link resources.
``` {
The below Azure Resource Manager template creates:
} ```
+### Set AMPLS access flags - PowerShell example
+To set the access mode flags on your AMPLS, use the following PowerShell script, which sets both flags to Open. To use the Private Only mode, use the value "PrivateOnly" instead.
+
+```powershell
+# scope details
+$scopeSubscriptionId = "ab1800bd-ceac-48cd-...-..."
+$scopeResourceGroup = "my-resource-group-name"
+$scopeName = "my-scope"
+
+# login
+Connect-AzAccount
+
+# select subscription
+Select-AzSubscription -SubscriptionId $scopeSubscriptionId
+
+# get private link scope resource
+$scope = Get-AzResource -ResourceType Microsoft.Insights/privateLinkScopes -ResourceGroupName $scopeResourceGroup -ResourceName $scopeName -ApiVersion "2021-07-01-preview"
+
+# set access mode settings
+$scope.Properties.AccessModeSettings.QueryAccessMode = "Open";
+$scope.Properties.AccessModeSettings.IngestionAccessMode = "Open";
+$scope | Set-AzResource -Force
+```
+
+### Set resource access flags
+To manage the workspace or component access flags, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
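+
+For example, a hedged sketch that blocks public ingestion on a workspace while keeping queries open (resource names are placeholders):
+
+```azurecli
+az monitor log-analytics workspace update --resource-group <resource-group> --workspace-name <workspace-name> --ingestion-access Disabled --query-access Enabled
+```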
++
+## Review and validate your Private Link setup
+
+### Reviewing your Endpoint's DNS settings
+The Private Endpoint you created should now have five DNS zones configured:
+
+* privatelink-monitor-azure-com
+* privatelink-oms-opinsights-azure-com
+* privatelink-ods-opinsights-azure-com
+* privatelink-agentsvc-azure-automation-net
+* privatelink-blob-core-windows-net
+
+> [!NOTE]
+> Each of these zones maps specific Azure Monitor endpoints to private IPs from the VNet's pool of IPs. The IP addresses shown in the following images are only examples. Your configuration should instead show private IPs from your own network.
+
+#### Privatelink-monitor-azure-com
+This zone covers the global endpoints used by Azure Monitor, meaning these endpoints serve requests across all resources rather than a specific one. This zone should have endpoints mapped for:
+* `in.ai` - Application Insights ingestion endpoint (both a global and a regional entry)
+* `api` - Application Insights and Log Analytics API endpoint
+* `live` - Application Insights live metrics endpoint
+* `profiler` - Application Insights profiler endpoint
+* `snapshot` - Application Insights snapshots endpoint
+[![Screenshot of Private DNS zone monitor-azure-com.](./media/private-link-security/dns-zone-privatelink-monitor-azure-com.png)](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded.png#lightbox)
+
+#### privatelink-oms-opinsights-azure-com
+This zone covers workspace-specific mapping to OMS endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
+[![Screenshot of Private DNS zone oms-opinsights-azure-com.](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com-expanded.png#lightbox)
+
+#### privatelink-ods-opinsights-azure-com
+This zone covers workspace-specific mapping to ODS endpoints - the ingestion endpoint of Log Analytics. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
+[![Screenshot of Private DNS zone ods-opinsights-azure-com.](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com-expanded.png#lightbox)
+
+#### privatelink-agentsvc-azure-automation-net
+This zone covers workspace-specific mapping to the agent service automation endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
+[![Screenshot of Private DNS zone agent svc-azure-automation-net.](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net.png)](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net-expanded.png#lightbox)
+
+#### privatelink-blob-core-windows-net
+This zone configures connectivity to the global agents' solution packs storage account. Through it, agents can download new or updated solution packs (also known as management packs). Only one entry is required to handle Log Analytics agents, no matter how many workspaces are used.
+[![Screenshot of Private DNS zone blob-core-windows-net.](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net.png)](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net-expanded.png#lightbox)
+> [!NOTE]
+> This entry is only added to Private Link setups created on or after April 19, 2021 (or starting June 2021 on Azure sovereign clouds).
++
+### Validating you are communicating over a Private Link
+* To validate your requests are now sent through the Private Endpoint, you can review them with a network tracking tool or even your browser. For example, when attempting to query your workspace or application, make sure the request is sent to the private IP mapped to the API endpoint; in this example, it's *172.17.0.9*.
+
+ Note: Some browsers may use other DNS settings (see [Browser DNS settings](./private-link-design.md#browser-dns-settings)). Make sure your DNS settings apply.
+
+* To make sure your workspace or component aren't receiving requests from public networks (not connected through AMPLS), set the resource's public ingestion and query flags to *No* as explained in [Configure access to your resources](#configure-access-to-your-resources).
+
+* From a client on your protected network, run `nslookup` against any of the endpoints listed in your DNS zones. Your DNS server should resolve it to the mapped private IPs instead of the public IPs used by default, as shown in the example following this list.
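+
+For example (the workspace ID is a placeholder):
+
+```bash
+# Expect a private IP (for example, 172.17.0.x) in the answer,
+# not the service's public IP.
+nslookup <workspace-id>.ods.opinsights.azure.com
+```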
++ ## Next steps - Learn about [private storage](private-storage.md)
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-design.md
Last updated 08/01/2021
# Design your Private Link setup
-Before setting up your Azure Monitor Private Link setup, consider your network topology, and specifically your DNS routing topology.
+Before you set up your Azure Monitor Private Link, consider your network topology, and specifically your DNS routing topology.
As discussed in [How it works](./private-link-security.md#how-it-works), setting up a Private Link affects traffic to all Azure Monitor resources. That's especially true for Application Insights resources. Additionally, it affects not only the network connected to the Private Endpoint but also all other networks that share the same DNS.
-> [!NOTE]
-> The simplest and most secure approach would be:
-> 1. Create a single Private Link connection, with a single Private Endpoint and a single AMPLS. If your networks are peered, create the Private Link connection on the shared (or hub) VNet.
-> 2. Add *all* Azure Monitor resources (Application Insights components and Log Analytics workspaces) to that AMPLS.
-> 3. Block network egress traffic as much as possible.
+The simplest and most secure approach would be:
+1. Create a single Private Link connection, with a single Private Endpoint and a single AMPLS. If your networks are peered, create the Private Link connection on the shared (or hub) VNet.
+2. Add *all* Azure Monitor resources (Application Insights components and Log Analytics workspaces) to that AMPLS.
+3. Block network egress traffic as much as possible.
-If for some reason you can't use a single Private Link and a single Azure Monitor Private Link Scope (AMPLS), the next best thing would be to create isolated Private Link connections for isolation networks. If you are (or can align with) using spoke vnets, follow the guidance in [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, setup separate private link settings in the relevant spoke VNets. **Make sure to separate DNS zones as well**, since sharing DNS zones with other spoke networks will cause DNS overrides.
+If you can't use a single Private Link and a single Azure Monitor Private Link Scope (AMPLS), the next best thing is to create isolated Private Link connections for isolated networks. If you use (or can align with) spoke VNets, follow the guidance in [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, set up separate Private Link settings in the relevant spoke VNets. **Make sure to separate DNS zones as well**, since sharing DNS zones with other spoke networks will cause DNS overrides.
## Plan by network topology
-### Hub-spoke networks
-Hub-spoke topologies can avoid the issue of DNS overrides by setting the Private Link on the hub (main) VNet, and not on each spoke VNet. This setup makes sense especially if the Azure Monitor resources used by the spoke VNets are shared.
+
+### Guiding principle: Avoid DNS overrides by using a single AMPLS
+Some networks are composed of multiple VNets or other connected networks. If these networks share the same DNS, setting up a Private Link on any of them would update the DNS and affect traffic across all networks.
+
+In the following diagram, VNet 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, VNet 10.0.2.x connects to AMPLS2, and overrides the DNS mapping of the **same global endpoints** with IPs from its range. Since these VNets aren't peered, the first VNet now fails to reach these endpoints.
+
+To avoid this conflict, create only a single AMPLS object per DNS.
+
+![Diagram of DNS overrides in multiple VNets](./media/private-link-security/dns-overrides-multiple-vnets.png)
++
+### Hub-and-spoke networks
+Hub-and-spoke topologies can avoid the issue of DNS overrides by setting the Private Link connection on the hub (main) VNet, and not on each spoke VNet. This setup makes sense especially if the Azure Monitor resources used by the spoke VNets are shared.
![Hub-and-spoke-single-PE](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png)
Hub-spoke topologies can avoid the issue of DNS overrides by setting the Private
Network peering is used in various topologies, other than hub-spoke. Such networks can reach each other's IP addresses, and most likely share the same DNS. In such cases, our recommendation is similar to hub-spoke: select a single network that is reached by all other (relevant) networks and set the Private Link connection on that network. Avoid creating multiple Private Endpoints and AMPLS objects, since ultimately only the last one set in the DNS will apply. ### Isolated networks
-#If your networks aren't peered, **you must also separate their DNS in order to use Private Links**. Once that's done, you can create a Private Link for one (or many) network, without affecting traffic of other networks. That means creating a separate Private Endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components, or to different ones.
+If your networks aren't peered, **you must also separate their DNS in order to use Private Links**. After that's done, you can create a Private Link for one (or many) networks, without affecting the traffic of other networks. That means creating a separate Private Endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components, or to different ones.
### Testing locally: Edit your machine's hosts file instead of the DNS As a local bypass to the All or Nothing behavior, you can select not to update your DNS with the Private Link records, and instead edit the hosts files on select machines so only these machines would send requests to the Private Link endpoints.
As a local bypass to the All or Nothing behavior, you can select not to update y
That approach isn't recommended for production environments.
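+
+For example, a hedged sketch of such a hosts-file entry (`/etc/hosts` on Linux); the private IP and workspace ID are placeholders for values from your own Private Endpoint:
+
+```
+# Send this machine's ingestion traffic to the Private Endpoint.
+172.17.0.10  <workspace-id>.ods.opinsights.azure.com
+```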
+## Control how Private Links apply to your networks
+Private Link access modes (introduced in August 2021) allow you to control how Private Links affect your network traffic. These settings can apply to your AMPLS object (to affect all connected networks) or to specific networks connected to it.
+
+Choosing the proper access mode is critical, since it determines how your network traffic is handled. Each of these modes can be set for ingestion and queries, separately:
+
+* Private Only - allows the VNet to reach only Private Link resources (resources in the AMPLS). That's the most secure mode of working, since it prevents data exfiltration. To achieve that, traffic to Azure Monitor resources out of the AMPLS is blocked.
+![Diagram of AMPLS Private Only access mode](./media/private-link-security/ampls-private-only-access-mode.png)
+* Open - allows the VNet to reach both Private Link resources and resources not in the AMPLS (if they [accept traffic from public networks](./private-link-design.md#control-network-access-to-your-resources)). While the Open access mode doesn't prevent data exfiltration, it still offers the other benefits of Private Links - traffic to Private Link resources is sent through private endpoints, validated, and sent over the Microsoft backbone. The Open mode allows for a gradual onboarding process, or a mixed mode of work, combining Private Link access to some resources and public access to others.
+![Diagram of AMPLS Open access mode](./media/private-link-security/ampls-open-access-mode.png)
+Access modes are set separately for ingestion and queries. For example, you can set the Private Only mode for ingestion and the Open mode for queries.
+
+> [!NOTE]
+> Use caution when selecting your access mode: the Private Only access mode blocks traffic to resources not in the AMPLS across all networks that share the same DNS, regardless of subscription or tenant. If you can't add all Azure Monitor resources to the AMPLS, we recommend that you use the Open mode and add select resources to your AMPLS. Switch to the Private Only mode for maximum security only after you've added all Azure Monitor resources to your AMPLS.
+
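As a rough sketch of how these modes might be declared, the following Bicep fragment creates an AMPLS with mixed access modes. The resource name is a placeholder, and the fragment assumes the `2021-07-01-preview` API version that introduced `accessModeSettings`:

```bicep
// Hypothetical AMPLS: ingestion is locked to Private Link resources,
// while queries may still reach resources outside the AMPLS.
resource ampls 'microsoft.insights/privateLinkScopes@2021-07-01-preview' = {
  name: 'my-ampls'
  location: 'global' // AMPLS is a global resource
  properties: {
    accessModeSettings: {
      ingestionAccessMode: 'PrivateOnly'
      queryAccessMode: 'Open'
    }
  }
}
```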
+### Setting access modes for specific networks
+The access modes set on the AMPLS resource affect all networks, but you can override these settings for specific networks.
+
+In the following diagram, VNet1 uses the Open mode and VNet2 uses the Private Only mode. As a result, requests from VNet1 can reach Workspace1 and Component2 over a Private Link, and can reach Component3 over public networks (if it [accepts traffic from public networks](./private-link-design.md#control-network-access-to-your-resources)). Requests from VNet2, however, won't be able to reach Component3 at all.
+![Diagram of mixed access modes](./media/private-link-security/ampls-mixed-access-modes.png)
## Consider AMPLS limits
The AMPLS object has the following limits:
* A VNet can only connect to **one** AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to.
In the below diagram:
## Control network access to your resources
-Your Log Analytics workspaces or Application Insights components can be set to accept or block access from public networks, meaning networks not connected to the resource's AMPLS.
-That granularity allows you to set access according to your needs, per workspace. For example, you may accept ingestion only through Private Link connected networks (i.e. specific VNets), but still choose to accept queries from all networks, public and private.
-Note that blocking queries from public networks means, clients (machines, SDKs etc.) outside of the connected AMPLSs can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal and that query Log Analytics data are also affected by that setting.
+Your Log Analytics workspaces or Application Insights components can be set to:
+* Accept or block ingestion from public networks (networks not connected to the resource's AMPLS).
+* Accept or block queries from public networks (networks not connected to the resource's AMPLS).
+
+That granularity allows you to set access according to your needs, per workspace. For example, you may accept ingestion only through Private Link-connected networks (meaning specific VNets), but still choose to accept queries from all networks, public and private.
+
+Blocking queries from public networks means clients (machines, SDKs, and so on) outside of the connected AMPLSs can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top of them, such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences that run outside the Azure portal and query Log Analytics data are also affected by this setting.
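As a sketch, the corresponding per-resource flags on a Log Analytics workspace can be set in Bicep as follows. The workspace name and location are placeholders, and both flags accept `Enabled` or `Disabled`:

```bicep
// Hypothetical workspace that ingests only from AMPLS-connected networks
// but still answers queries from public networks.
resource workspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: 'my-workspace'
  location: 'eastus'
  properties: {
    sku: {
      name: 'PerGB2018'
    }
    publicNetworkAccessForIngestion: 'Disabled'
    publicNetworkAccessForQuery: 'Enabled'
  }
}
```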
+
+### Exceptions
+
+#### Diagnostic logs
+Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel, and are not controlled by these settings.
+
+#### Azure Resource Manager
+Restricting access as explained above applies to data in the resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. To control these settings, restrict access to resources by using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md).
+Additionally, specific experiences (such as the LogicApp connector, Update Management solution, and the Workspace Summary blade in the portal, showing the solutions dashboard) query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well.
## Application Insights considerations
* You'll need to add resources hosting the monitored workloads to a private link. For example, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
Last updated 10/05/2020
# Use Azure Private Link to connect networks to Azure Monitor
-With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) services to your virtual network by using private endpoints. For many services, you just set up an endpoint for each resource. However, Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads.
+With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads. An Azure Monitor Private Link connects a private endpoint to a set of Azure Monitor resources, defining the boundaries of your monitoring network. That set is called an Azure Monitor Private Link Scope (AMPLS).
+ ## Advantages
For more information, see [Key Benefits of Private Link](../../private-link/pri
## How it works
-Azure Monitor Private Link Scope (AMPLS) connects private endpoints (and the VNets they're contained in) to one or more Azure Monitor resources - Log Analytics workspaces and Application Insights components.
+### Overview
+An Azure Monitor Private Link Scope connects private endpoints (and the VNets they're contained in) to one or more Azure Monitor resources - Log Analytics workspaces and Application Insights components.
![Diagram of basic resource topology](./media/private-link-security/private-link-basic-topology.png)
* The Private Endpoint on your VNet allows it to reach Azure Monitor endpoints through private IPs from your network's pool, instead of the public IPs of these endpoints. That allows you to keep using your Azure Monitor resources without opening your VNet to unrequired outbound traffic.
-* Traffic from the Private Endpoint to your Azure Monitor resources will go over the Microsoft Azure backbone, and not routed to public networks.
+* Traffic from the Private Endpoint to your Azure Monitor resources goes over the Microsoft Azure backbone, and isn't routed to public networks.
+* You can configure your Azure Monitor Private Link Scope (or specific networks) to use the preferred access mode - either allow traffic only to Private Link resources, or allow traffic to both Private Link resources and non-Private-Link resources (resources out of the AMPLS).
* You can configure each of your workspaces or components to allow or deny ingestion and queries from public networks. That provides resource-level protection, so that you can control traffic to specific resources.

> [!NOTE]
-> A single Azure Monitor resource can belong to multiple AMPLSs, but you cannot connect a single VNet to more than one AMPLS.
+> A VNet can only connect to a single AMPLS, which lists up to 50 resources that can be reached over a Private Link.
-### Azure Monitor Private Links and your DNS: It's All or Nothing
-Some Azure Monitor services use global endpoints, meaning they serve requests targeting any workspace/component. When you set up a Private Link connection your DNS is updated to map Azure Monitor endpoints to private IPs, in order to send traffic through the Private Link. When it comes to global endpoints, setting up a Private Link (even to a single resource) affects traffic to all resources. In other words, it's impossible to create a Private Link connection only for a specific component or workspace.
+### Azure Monitor Private Link relies on your DNS
+When you set up a Private Link connection, your DNS zones are set to map Azure Monitor endpoints to private IPs in order to send traffic through the Private Link. Azure Monitor uses both resource-specific endpoints and regional or global endpoints that handle traffic to multiple workspaces/components. When it comes to regional and global endpoints, setting up a Private Link (even for a single resource) affects the DNS mapping that controls traffic to **all** resources. In other words, traffic to all workspaces or components could be affected by a single Private Link setup.
#### Global endpoints
Most importantly, traffic to the following global endpoints will be sent through the Private Link:
That effectively means that all Application Insights traffic will be sent to the
Traffic to Application Insights resources not added to your AMPLS won't pass the Private Link validation, and will fail.
-![Diagram of All or Nothing behavior](./media/private-link-security/all-or-nothing.png)
- #### Resource-specific endpoints
-All Log Analytics endpoints except the Query endpoint, are workspace-specific. So, creating a Private Link to a specific Log Analytics workspace won't affect ingestion (or other) traffic to other workspaces, which will continue to use the public Log Analytics endpoints. All queries, however, will be sent through the Private Link.
+All Log Analytics endpoints, except the Query endpoint, are workspace-specific. So, creating a Private Link to a specific Log Analytics workspace won't affect ingestion to other workspaces, which will continue to use the public endpoints.
++
+> [!NOTE]
+> Create only a single AMPLS for all networks that share the same DNS. Creating multiple AMPLS resources will cause Azure Monitor DNS endpoints to override each other, and break existing environments.
-### Azure Monitor Private Link applies to all networks that share the same DNS
-Some networks are composed of multiple VNets or other connected networks. If these networks share the same DNS, setting up a Private Link on any of them would update the DNS and affect traffic across all networks. That's especially important to note due to the "All or Nothing" behavior described above.
+### Private Link access modes: Private Only vs Open
+As discussed in [Azure Monitor Private Link relies on your DNS](#azure-monitor-private-link-relies-on-your-dns), only a single AMPLS resource should be created for all networks that share the same DNS. As a result, organizations that use a single global or regional DNS in fact have a single Private Link to manage traffic to all Azure Monitor resources, across all global or regional networks.
-![Diagram of DNS overrides in multiple VNets](./media/private-link-security/dns-overrides-multiple-vnets.png)
+For Private Links created before September 2021, that means:
+* Log ingestion works only for resources in the AMPLS. Ingestion to all other resources is denied (across all networks that share the same DNS), regardless of subscription or tenant.
+* Queries have a more open behavior, allowing query requests to reach even resources not in the AMPLS. The intention here was to avoid breaking customer queries to resources not in the AMPLS, and allow resource-centric queries to return the complete result set.
-In the above diagram, VNet 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, VNet 10.0.2.x connects to AMPLS2, and overrides the DNS mapping of the *same global endpoints* with IPs from its range. Since these VNets aren't peered, the first VNet now fails to reach these endpoints.
+However, this behavior proved to be too restrictive for some customers (since it breaks ingestion to resources not in the AMPLS), too permissive for others (since it allows querying resources not in the AMPLS), and generally confusing.
+Therefore, Private Links created starting September 2021 have new mandatory AMPLS settings that explicitly set how Private Links should affect network traffic. When creating a new AMPLS resource, you're now required to select the desired access modes for ingestion and queries separately.
+* Private Only mode - allows traffic only to Private Link resources
+* Open mode - uses Private Link to communicate with resources in the AMPLS, but also allows traffic to continue to other resources as well. See [Control how Private Links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks) to learn more.
## Next steps
- [Design your Private Link setup](private-link-design.md)
azure-percept Audio Button Led Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/audio-button-led-behavior.md
Title: Azure Percept Audio button and LED states description: Learn more about the button and LED states of Azure Percept Audio--++ Last updated 08/03/2021
azure-percept Azure Percept Audio Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-audio-datasheet.md
Title: Azure Percept Audio datasheet description: Check out the Azure Percept Audio datasheet for detailed device specifications--++ Last updated 02/16/2021
azure-percept Azure Percept Devkit Software Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-devkit-software-release-notes.md
Title: Azure Percept DK software release notes description: Information about changes made to the Azure Percept DK software. -+ Last updated 08/23/2021
azure-percept Azure Percept Dk Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-dk-datasheet.md
Title: Azure Percept DK datasheet description: Check out the Azure Percept DK datasheet for detailed device specifications--++ Last updated 02/16/2021
azure-percept Azure Percept Vision Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-vision-datasheet.md
Title: Azure Percept Vision datasheet description: Check out the Azure Percept Vision datasheet for detailed device specifications--++ Last updated 02/16/2021
azure-percept Azureeyemodule Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azureeyemodule-overview.md
Title: Overview of the Azure Percept AI vision module
+ Title: Azure Percept Vision AI module
description: An overview of the azureeyemodule, which is the module responsible for running the AI vision workload on the Azure Percept DK.--++ Last updated 08/09/2021
-# What is azureeyemodule?
+# Azure Percept Vision AI module
Azureeyemodule is the name of the edge module responsible for running the AI vision workload on the Azure Percept DK. It's part of the Azure IoT suite of edge modules and is deployed to the Azure Percept DK during the [setup experience](./quickstart-percept-dk-set-up.md). This article provides an overview of the module and its architecture.
azure-percept Concept Security Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/concept-security-configuration.md
Title: Azure Percept firewall configuration and security recommendations
+ Title: Azure Percept security recommendations
description: Learn more about Azure Percept firewall configuration and security recommendations -+ Last updated 03/25/2021
-# Azure Percept firewall configuration and security recommendations
+# Azure Percept security recommendations
Review the guidelines below for information on configuring firewalls and general security best practices with Azure Percept.
azure-percept Connect Over Cellular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/connect-over-cellular.md
Title: Connect Azure Percept over 5G or LTE networks description: This article explains how to connect the Azure Percept DK over 5G or LTE networks.--++ Last updated 07/28/2021
-# Connect the Azure Percept DK over 5G or LTE networks
+# Connect Azure Percept over 5G or LTE networks
The benefits of connecting Edge AI devices over 5G/LTE networks are many. Scenarios where Edge AI is most effective are in places where Wi-Fi and LAN connectivity are limited, such as smart cities, autonomous vehicles, and agriculture. Additionally, 5G/LTE networks provide better security than Wi-Fi. Lastly, using IoT devices that run AI at the Edge provides a way to optimize the bandwidth on 5G/LTE networks, where only necessary information is sent to the cloud while most of the data is processed on the device. Today, the Azure Percept DK isn't able to connect directly to 5G/LTE networks. However, it can connect to 5G/LTE gateways using the built-in Ethernet and Wi-Fi capabilities. This article covers how this works.
azure-percept Delete Voice Assistant Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/delete-voice-assistant-application.md
Title: Delete your Azure Percept Audio voice assistant application description: This article shows you how to delete a previously created voice assistant application.--++ Last updated 08/03/2021
-# Delete your voice assistant application
+# Delete your Azure Percept Audio voice assistant application
These instructions will show you how to delete a voice assistant application from your Azure Percept Audio device.
azure-percept Dev Tools Installer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/dev-tools-installer.md
Title: Install Azure Percept development tools description: Learn more about using the Dev Tools Pack Installer to accelerate advanced development with Azure Percept--++ Last updated 03/25/2021
-# Dev Tools Pack Installer overview
+# Install Azure Percept development tools
The Dev Tools Pack Installer is a one-stop solution that installs and configures all the tools required to develop an advanced intelligent edge solution.
azure-percept How To Capture Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-capture-images.md
Title: Capture images for a no-code vision solution in Azure Percept Studio
+ Title: Capture images in Azure Percept Studio
description: How to capture images with your Azure Percept DK in Azure Percept Studio--++ Last updated 02/12/2021
-# Capture images for a vision project in Azure Percept Studio
+# Capture images in Azure Percept Studio
Follow this guide to capture images using Azure Percept DK for an existing vision project. If you haven't created a vision project yet, see the [no-code vision tutorial](./tutorial-nocode-vision.md).
azure-percept How To Configure Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-configure-voice-assistant.md
Title: Configure your voice assistant application using Azure IoT Hub
+ Title: Configure your Azure Percept voice assistant application
description: Configure your voice assistant application using Azure IoT Hub--++ Last updated 02/15/2021
-# Configure your voice assistant application using Azure IoT Hub
+# Configure your Azure Percept voice assistant application
This article describes how to configure your voice assistant application using IoT Hub. For a step-by-step tutorial for the process of creating a voice assistant, see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md).
azure-percept How To Connect Over Ethernet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-connect-over-ethernet.md
Title: How to launch the Azure Percept DK setup experience over Ethernet
+ Title: Connect to Azure Percept DK over Ethernet
description: This guide shows users how to connect to the Azure Percept DK setup experience when connected over an Ethernet connection.--++ Last updated 06/01/2021
-# How to launch the Azure Percept DK setup experience over Ethernet
+# Connect to Azure Percept DK over Ethernet
In this how-to guide you'll learn how to launch the Azure Percept DK setup experience over an Ethernet connection. It's a companion to the [Quick Start: Set up your Azure Percept DK and deploy your first AI model](./quickstart-percept-dk-set-up.md) guide. See each option outlined below and choose which one is most appropriate for your environment.
azure-percept How To Connect To Percept Dk Over Serial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-connect-to-percept-dk-over-serial.md
Title: Connect to your Azure Percept DK over serial
+ Title: Connect to Azure Percept DK over serial
description: How to set up a serial connection to your Azure Percept DK with a USB to TTL serial cable--++ Last updated 02/03/2021
-# Connect to your Azure Percept DK over serial
+# Connect to Azure Percept DK over serial
Follow the steps below to set up a serial connection to your Azure Percept DK through [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
azure-percept How To Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-deploy-model.md
Title: Deploy a vision AI model to your Azure Percept DK
+ Title: Deploy a vision AI model to Azure Percept DK
description: Learn how to deploy a vision AI model to your Azure Percept DK from Azure Percept Studio--++ Last updated 02/12/2021
-# Deploy a vision AI model to your Azure Percept DK
+# Deploy a vision AI model to Azure Percept DK
Follow this guide to deploy a vision AI model to your Azure Percept DK from within Azure Percept Studio.
azure-percept How To Determine Your Update Strategy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-determine-your-update-strategy.md
Title: Determine your update strategy for Azure Percept DK description: Pros and cons of Azure Percept DK OTA or USB cable updates. Recommendation for choosing the best update approach for different users. -+ Last updated 08/23/2021
-# How to determine your update strategy
+# Determine your update strategy for Azure Percept DK
To keep your Azure Percept DK software up to date, Microsoft offers two update methods for the dev kit: **update over USB cable** or **over-the-air (OTA) update**.
azure-percept How To Get Hardware Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-get-hardware-support.md
Title: How to get hardware support for Azure Percept DK hardware from ASUS
+ Title: Get Azure Percept hardware support from ASUS
description: This guide shows you how to contact ASUS for technical support for the Azure Percept DK hardware. --++ Last updated 07/13/2021
-# Get support for your Azure Percept DK hardware from ASUS
+# Get Azure Percept hardware support from ASUS
As the OEM for the Azure Percept DK, ASUS provides technical support to all customers who purchased a device, and business support for customers interested in purchasing devices. This article shows you how to contact ASUS to get support.
azure-percept How To Manage Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-manage-voice-assistant.md
Title: Configure a voice assistant application within Azure Percept Studio
+ Title: Manage your Azure Percept voice assistant application
description: Configure a voice assistant application within Azure Percept Studio--++ Last updated 02/15/2021
-# Managing your voice assistant
+# Manage your Azure Percept voice assistant application
This article describes how to configure the keyword and commands of your voice assistant application within [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). For guidance on configuring your keyword within IoT Hub instead of the portal, see this [how-to article](./how-to-configure-voice-assistant.md).
azure-percept How To Select Update Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-select-update-package.md
Title: Select your Azure Percept DK update package description: How to identify your Azure Percept DK version and select the best update package for it -+ Last updated 07/23/2021
azure-percept How To Set Up Advanced Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-set-up-advanced-network-settings.md
Title: Set up advanced network settings on the Azure Percept DK description: This article walks user through the Advanced Network Settings during the Azure Percept DK setup experience--++ Last updated 7/19/2021
-# Set up Advanced Network Settings on the Azure Percept DK
+# Set up advanced network settings on the Azure Percept DK
The Azure Percept DK allows you to control various networking components on the dev kit. This is done via the Advanced Networking Settings in the setup experience. To access these settings, you must [start the setup experience](./quickstart-percept-dk-set-up.md) and select **Access advanced network settings** on the **Network connection** page.
azure-percept How To Set Up Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-set-up-over-the-air-updates.md
Title: Set up Azure IoT Hub to deploy over-the-air updates description: Learn how to configure Azure IoT Hub to deploy updates over-the-air to Azure Percept DK -+ Last updated 03/30/2021
-# How to set up Azure IoT Hub to deploy over the air updates to your Azure Percept DK
+# Set up Azure IoT Hub to deploy over-the-air updates
Keep your Azure Percept DK secure and up to date using over-the-air updates. In a few simple steps, you will be able to set up your Azure environment with Device Update for IoT Hub and deploy the latest updates to your Azure Percept DK.
azure-percept How To Ssh Into Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-ssh-into-percept-dk.md
Title: Connect to your Azure Percept DK over SSH
+ Title: Connect to Azure Percept DK over SSH
description: Learn how to SSH into your Azure Percept DK with PuTTY--++ Last updated 03/18/2021
-# Connect to your Azure Percept DK over SSH
+# Connect to Azure Percept DK over SSH
Follow the steps below to set up an SSH connection to your Azure Percept DK through OpenSSH or [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
azure-percept How To Troubleshoot Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-troubleshoot-setup.md
Title: Troubleshoot issues during the Azure Percept DK setup experience
+ Title: Troubleshoot the Azure Percept DK setup experience
description: Get troubleshooting tips for some of the more common issues found during the setup experience--++ Last updated 03/25/2021
-# Azure Percept DK setup experience troubleshooting guide
+# Troubleshoot the Azure Percept DK setup experience
Refer to the table below for workarounds to common issues found during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md). If your issue still persists, contact Azure customer support.
azure-percept How To Update Over The Air https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-update-over-the-air.md
Title: Update your Azure Percept DK using over-the-air (OTA) updates
+ Title: Update Azure Percept DK over-the-air
description: Learn how to receive over-the air (OTA) updates to your Azure Percept DK -+ Last updated 03/30/2021
-# Update your Azure Percept DK using over-the-air (OTA) updates
+# Update Azure Percept DK over-the-air
Follow this guide to learn how to update the OS and firmware of the carrier board of your Azure Percept DK over-the-air (OTA) with Device Update for IoT Hub.
azure-percept How To Update Via Usb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-update-via-usb.md
Title: Update your Azure Percept DK over a USB-C cable connection
+ Title: Update Azure Percept DK over a USB-C connection
description: Learn how to update the Azure Percept DK over a USB-C cable connection -+ Last updated 03/18/2021
-# Update the Azure Percept DK over a USB-C cable connection
+# Update Azure Percept DK over a USB-C connection
This guide will show you how to successfully update your dev kit's operating system and firmware over a USB connection. Here's an overview of what you will be doing during this procedure.
azure-percept How To View Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-view-telemetry.md
Title: View your Azure Percept DK's model inference telemetry description: Learn how to view your Azure Percept DK's vision model inference telemetry in Azure IoT Explorer--++ Last updated 02/17/2021
azure-percept How To View Video Stream https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-view-video-stream.md
Title: View your Azure Percept DK RTSP video stream description: Learn how to view the RTSP video stream from Azure Percept DK--++ Last updated 02/12/2021
azure-percept Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/known-issues.md
Title: Azure Percept known issues description: Learn more about Azure Percept known issues and their workarounds--++ Last updated 03/25/2021
azure-percept Overview 8020 Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-8020-integration.md
Title: Azure Percept DK and 80/20 integration
+ Title: Azure Percept DK 80/20 integration
description: Learn more about how Azure Percept DK integrates with the 80/20 railing system.--++ Last updated 03/24/2021
-# Azure Percept DK 80/20 integration overview
+# Azure Percept DK 80/20 integration
The Azure Percept DK and Audio Accessory were designed to integrate with the [80/20 T-slot aluminum building system](https://8020.net/).
azure-percept Overview Advanced Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-advanced-code.md
Title: Azure Percept advanced development
+ Title: Advanced development with Azure Percept
description: Learn more about advanced development tools on Azure Percept--++ Last updated 03/23/2021
azure-percept Overview Ai Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-ai-models.md
Title: Azure Percept sample AI models description: Learn more about the AI models available for prototyping and deployment--++ Last updated 03/23/2021
azure-percept Overview Azure Percept Audio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-audio.md
Title: Azure Percept Audio device overview description: Learn more about Azure Percept Audio--++ Last updated 03/23/2021
-# Introduction to Azure Percept Audio
+# Azure Percept Audio device overview
Azure Percept Audio is an accessory device that adds speech AI capabilities to [Azure Percept DK](./overview-azure-percept-dk.md). It contains a preconfigured audio processor and a four-microphone linear array, enabling you to use voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services. It is integrated out-of-the-box with Azure Percept DK, [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), and other Azure edge management services. Azure Percept Audio is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
azure-percept Overview Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-dk.md
Title: Azure Percept DK overview
-description: Learn more about the Azure Percept DK
--
+ Title: Azure Percept DK and Vision device overview
+description: Learn more about the Azure Percept DK and Azure Percept Vision
++ Last updated 03/23/2021
-# Azure Percept DK overview
+# Azure Percept DK and Vision device overview
Azure Percept DK is an edge AI development kit designed for developing vision and audio AI solutions with [Azure Percept Studio](./overview-azure-percept-studio.md). Azure Percept DK is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
azure-percept Overview Azure Percept Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-studio.md
Title: Azure Percept Studio overview description: Learn more about Azure Percept Studio--++ Last updated 03/23/2021
-# Azure Percept Studio Overview
+# Azure Percept Studio overview
[Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) is the single launch point for creating edge AI models and solutions. Azure Percept Studio allows you to discover and complete guided workflows that make it easy to integrate edge AI-capable hardware and powerful Azure AI and IoT cloud services.
azure-percept Overview Azure Percept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept.md
Title: Azure Percept overview description: Learn more about the Azure Percept platform--++ Last updated 03/23/2021
-# Introduction to Azure Percept
+# Azure Percept overview
Azure Percept is a family of hardware, software, and services designed to accelerate business transformation using IoT and AI at the edge. Azure Percept covers the full stack from silicon to services to solve the integration challenges of edge AI at scale.
azure-percept Overview Percept Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-percept-security.md
Title: Azure Percept security overview
+ Title: Azure Percept security
description: Learn more about Azure Percept security -+ Last updated 03/24/2021
-# Azure Percept security overview
+# Azure Percept security
Azure Percept devices are designed with a hardware root of trust. This built-in security helps protect inference data and privacy-sensitive sensors like cameras and microphones and enables device authentication and authorization for Azure Percept Studio services.
azure-percept Overview Update Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-update-experience.md
Title: Azure Percept DK update experience description: Learn more about how to keep the Azure Percept DK up-to-date -+ Last updated 03/24/2021
-# Azure Percept DK update experience overview
+# Azure Percept DK update experience
With Azure Percept DK, you may update your dev kit OS and firmware over-the-air (OTA) or via USB. OTA updating is an easy way to keep devices up-to-date through the [Device Update for IoT Hub](../iot-hub-device-update/index.yml) service. USB updates are available for users who are unable to use OTA updates or when a factory reset of the device is needed. Check out the following how-to guides to get started with Azure Percept DK device updates:
azure-percept Quickstart Percept Audio Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-audio-setup.md
Title: Set up Azure Percept Audio
+ Title: Set up the Azure Percept Audio device
description: Learn how to connect your Azure Percept Audio device to your Azure Percept DK--++ Last updated 03/25/2021
-# Azure Percept Audio setup
+# Set up the Azure Percept Audio device
Azure Percept Audio works out of the box with Azure Percept DK. No unique setup is required.
azure-percept Quickstart Percept Dk Set Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-set-up.md
Title: Set up your Azure Percept DK
+ Title: Set up the Azure Percept DK device
description: Set up your Azure Percept DK and connect it to Azure IoT Hub--++ Last updated 03/17/2021
-# Set up your Azure Percept DK
+# Set up the Azure Percept DK device
Complete the Azure Percept DK setup experience to configure your dev kit. After verifying that your Azure account is compatible with Azure Percept, you will:
azure-percept Quickstart Percept Dk Unboxing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-unboxing.md
Title: Unbox and assemble your Azure Percept DK
+ Title: Unbox and assemble the Azure Percept DK device
description: Learn how to unbox, connect, and power on your Azure Percept DK--++ Last updated 02/16/2021
-# Quickstart: unbox and assemble your Azure Percept DK
+# Unbox and assemble the Azure Percept DK device
Once you have received your Azure Percept DK, reference this guide for information on connecting the components and powering on the device.
azure-percept Return To Voice Assistant Application Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/return-to-voice-assistant-application-window.md
Title: Return to your Azure Percept Audio voice assistant application window
+ Title: Find your voice assistant application in Azure Percept Studio
description: This article shows you how to return to a previously created voice assistant application window. --++ Last updated 08/03/2021
-# Return to your voice assistant application window in Azure Percept Studio
+# Find your voice assistant application in Azure Percept Studio
This how-to guide shows you how to return to a previously created voice assistant application.
azure-percept Software Releases Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/software-releases-over-the-air-updates.md
Title: Software releases for Azure Percept DK OTA updates description: Information and download links for the Azure Percept DK over-the-air update packages -+ Last updated 08/23/2021
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/software-releases-usb-cable-updates.md
Title: Azure Percept DK software releases for update over USB cable description: Information and download links for the USB cable update package of Azure Percept DK -+ Last updated 08/23/2021
azure-percept Speech Module Interface Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/speech-module-interface-workflow.md
Title: Azure Percept speech module interface workflow description: Describes the workflow and available methods for the Azure Percept speech module --++ Last updated 7/19/2021
azure-percept Troubleshoot Audio Accessory Speech Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-audio-accessory-speech-module.md
Title: Troubleshoot issues with Azure Percept Audio and the speech module
+ Title: Troubleshoot Azure Percept Audio and speech module
description: Get troubleshooting tips for Azure Percept Audio and azureearspeechclientmodule--++ Last updated 08/03/2021
-# Azure Percept Audio and speech module troubleshooting
+# Troubleshoot Azure Percept Audio and speech module
Use the guidelines below to troubleshoot voice assistant application issues.
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-dev-kit.md
Title: Troubleshoot issues with Azure Percept DK
+ Title: Troubleshoot the Azure Percept DK device
description: Get troubleshooting tips for some of the more common issues with Azure Percept DK and IoT Edge--++ Last updated 08/10/2021
-# Azure Percept DK troubleshooting
+# Troubleshoot the Azure Percept DK device
The purpose of this troubleshooting article is to help Azure Percept DK users to quickly resolve common issues with their dev kits. It also provides guidance on collecting logs for when extra support is needed.
azure-percept Tutorial No Code Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/tutorial-no-code-speech.md
Title: Create a voice assistant with Azure Percept DK and Azure Percept Audio
+ Title: Create a no-code voice assistant in Azure Percept Studio
description: Learn how to create and deploy a no-code speech solution to your Azure Percept DK--++ Last updated 02/17/2021
-# Create a voice assistant with Azure Percept DK and Azure Percept Audio
+# Create a no-code voice assistant in Azure Percept Studio
In this tutorial, you will create a voice assistant from a template to use with your Azure Percept DK and Azure Percept Audio. The voice assistant demo runs within [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) and contains a selection of voice-controlled virtual objects. To control an object, say your keyword, which is a word or short phrase that wakes your device, followed by a command. Each template responds to a set of specific commands.
azure-percept Tutorial Nocode Vision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/tutorial-nocode-vision.md
Title: Create a no-code vision solution in Azure Percept Studio description: Learn how to create a no-code vision solution in Azure Percept Studio and deploy it to your Azure Percept DK--++ Last updated 02/10/2021
azure-percept Vision Solution Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/vision-solution-troubleshooting.md
Title: Troubleshoot issues with Azure Percept Vision and vision modules
+ Title: Troubleshoot Azure Percept Vision and vision modules
description: Get troubleshooting tips for some of the more common issues found in the vision AI prototyping experiences.--++ Last updated 03/29/2021
-# Vision solution troubleshooting
+# Troubleshoot Azure Percept Vision and vision modules
This article provides information on troubleshooting no-code vision solutions in Azure Percept Studio.
azure-portal Azure Portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-overview.md
Title: Azure portal overview description: The Azure portal is a graphical user interface that you can use to manage your Azure services. Learn how to navigate and find resources in the Azure portal. keywords: portal Previously updated : 03/12/2021 Last updated : 08/30/2021
The Azure portal is designed for resiliency and continuous availability. It has
## Azure portal menu
-You can choose the default mode for the portal menu. It can be docked or it can act as a flyout panel.
+You can [choose the default mode for the portal menu](set-preferences.md#set-menu-behavior). It can be docked or it can act as a flyout panel.
When the portal menu is in flyout mode, it's hidden until you need it. Select the menu icon to open or close the menu.
-![Azure portal menu in flyout mode](./media/azure-portal-overview/azure-portal-overview-portal-menu-flyout.png)
If you choose docked mode for the portal menu, it will always be visible. You can collapse the menu to provide more working space.
-![Azure portal menu in docked mode](./media/azure-portal-overview/azure-portal-overview-portal-menu-expandcollapse.png)
## Azure Home
-As a new subscriber to Azure services, the first thing you see after you [sign in to the portal](https://portal.azure.com) is **Azure Home**. This page compiles resources that help you get the most from your Azure subscription. We have included links to free online courses, documentation, core services, and useful sites for staying current and managing change for your organization. For quick and easy access to work in progress, we also show a list of your most recently visited resources. You can't customize this page, but you can choose whether to see **Azure Home** or **Azure Dashboard** as your default view. The first time you sign in, there's a prompt at the top of the page where you can save your preference.
+As a new subscriber to Azure services, the first thing you see after you [sign in to the portal](https://portal.azure.com) is **Azure Home**. This page compiles resources that help you get the most from your Azure subscription. We include links to free online courses, documentation, core services, and useful sites for staying current and managing change for your organization. For quick and easy access to work in progress, we also show a list of your most recently visited resources.
-![Screenshot showing where to save your preference.](./media/azure-portal-overview/azure-portal-default-view.png)
+You can't customize the Home page, but you can choose whether to see **Home** or **Dashboard** as your default view. The first time you sign in, there's a prompt at the top of the page where you can save your preference. You can [change your startup page selection at any time in **Portal settings**](set-preferences.md#startup-page).
-Both the Azure portal menu and the Azure default view can be changed in **Portal settings**. If you change your selection, the change is immediately applied.
-
-![Screenshot showing default view selector](./media/azure-portal-overview/azure-portal-overview-portal-settings-menu-home.png)
## Azure Dashboard
-Dashboards provide a focused view of the resources in your subscription that matter most to you. We've given you a default dashboard to get you started. You can customize this dashboard to bring the resources you use frequently into a single view. Any changes you make to the default view affect your experience only. However, you can create additional dashboards for your own use or publish your customized dashboards and share them with other users in your organization. For more information, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
+Dashboards provide a focused view of the resources in your subscription that matter most to you. We've given you a default dashboard to get you started. You can customize this dashboard to bring the resources you use frequently into a single view. Any changes you make to the default view affect your experience only. However, you can create additional dashboards for your own use, or publish your customized dashboards and share them with other users in your organization. For more information, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
+
+As noted above, you can [set your startup page to Dashboard](set-preferences.md#startup-page) if you want to see your most recently used [dashboard](azure-portal-dashboards.md) when you sign in to the Azure portal.
## Getting around the portal
It's helpful to understand the basic portal layout and how to interact with it.
The Azure portal menu and page header are global elements that are always present. These persistent features are the "shell" for the user interface associated with each individual service or feature, and the header provides access to global controls. The configuration page (sometimes referred to as a "blade") for a resource may also have a resource menu to help you move between features.
-The figure below labels the basic elements of the Azure portal, each of which are described in the following table.
+The figure below labels the basic elements of the Azure portal, each of which is described in the following table. In this example, the current focus is a virtual machine, but the same elements apply no matter what type of resource or service you're working with.
-![Screenshot showing full-screen portal view and key to UI elements](./media/azure-portal-overview/azure-portal-overview-portal-callouts.png)
-![Screenshot showing expanded portal menu](./media/azure-portal-overview/azure-portal-overview-portal-menu-callouts.png)
|Key|Description|
|:-:|:--|
|1|Page header. Appears at the top of every portal page and holds global elements.|
-|2| Global search. Use the search bar to quickly find a specific resource, a service, or documentation.|
+|2|Global search. Use the search bar to quickly find a specific resource, a service, or documentation.|
|3|Global controls. Like all global elements, these features persist across the portal and include: Cloud Shell, subscription filter, notifications, portal settings, help and support, and send us feedback.|
|4|Your account. View information about your account, switch directories, sign out, or sign in with a different account.|
-|5|Portal menu. The portal menu is a global element that helps you to navigate between services. Sometimes referred to as the sidebar, the portal menu mode can be changed in **Portal settings**.|
-|6|Resource menu. Many services include a resource menu to help you manage the service. You may see this element referred to as the left pane.|
-|7|Command bar. The controls on the command bar are contextual to your current focus.|
-|8|Working pane. Displays the details about the resource that is currently in focus.|
+|5|Azure portal menu. This global element can help you to navigate between services. Sometimes referred to as the sidebar. (Items 9 and 10 in this list appear in this menu.)|
+|6|Resource menu. Many services include a resource menu to help you manage the service. You may see this element referred to as the left pane. Here, you'll see commands that are contextual to your current focus.|
+|7|Command bar. These controls are contextual to your current focus.|
+|8|Working pane. Displays details about the resource that is currently in focus.|
|9|Breadcrumb. You can use the breadcrumb links to move back a level in your workflow.|
-|10|Master control to create a new resource in the current subscription. Expand or open the portal menu to find **+ Create a resource**. Search or browse the Azure Marketplace for the resource type you want to create.|
-|11|Your favorites list. See [Add, remove, and sort favorites](../azure-portal/azure-portal-add-remove-sort-favorites.md) to learn how to customize the list.|
+|10|Master control to create a new resource in the current subscription. Expand or open the Azure portal menu to find **+ Create a resource**. You can also find this option on the **Home** page. Then, search or browse the Azure Marketplace for the resource type you want to create.|
+|11|Your favorites list in the Azure portal menu. To learn how to customize this list, see [Add, remove, and sort favorites](../azure-portal/azure-portal-add-remove-sort-favorites.md).|
## Get started with services
If you're a new subscriber, you'll have to create a resource before there's anything to manage. Select **+ Create a resource** to view the services available in the Azure Marketplace. You'll find hundreds of applications and services from many providers here, all certified to run on Azure.
-We pre-populated your Favorites in the sidebar with links to commonly used services. To view all available services, select **All services** from the sidebar.
+We pre-populate your [Favorites](../azure-portal/azure-portal-add-remove-sort-favorites.md) in the sidebar with links to commonly used services. To view all available services, select **All services** from the sidebar.
> [!TIP]
-> The quickest way to find a resource, service, or documentation is to use *Search* in the global header. Use the breadcrumb links to go back to previous pages.
->
-Watch this video for a demo on how to use global search in the Azure portal.
+> The quickest way to find a resource, service, or documentation is to use *Search* in the global header.
+Watch this video for a demo on how to use global search in the Azure portal.
> [!VIDEO https://www.youtube.com/embed/nZ7WwTZcQbo]
Watch this video for a demo on how to use global search in the Azure portal.
## Next steps
-* Learn more about where to run Azure portal in [Supported browsers and devices](../azure-portal/azure-portal-supported-browsers-devices.md)
-* Stay connected on the go with [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/)
-* Onboard and set up your cloud environment with the [Azure Quickstart Center](../azure-portal/azure-portal-quickstart-center.md)
+* Learn more about where to run Azure portal in [Supported browsers and devices](../azure-portal/azure-portal-supported-browsers-devices.md).
+* Stay connected on the go with [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/).
+* Onboard and set up your cloud environment with the [Azure Quickstart Center](../azure-portal/azure-portal-quickstart-center.md).
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/data-types.md
Title: Data types in Bicep description: Describes the data types that are available in Bicep Previously updated : 06/01/2021 Last updated : 08/30/2021 # Data types in Bicep
Arrays start with a left bracket (`[`) and end with a right bracket (`]`). In Bi
In an array, each item is represented by the [any type](bicep-functions-any.md). You can have an array where each item is the same data type, or an array that holds different data types.
-Arrays in Bicep are 0-based. In the following example, the expression `exampleArray[0]` evaluates to 1 and `exampleArray[2]` evaluates to 3. The index of the indexer may itself be another expression. The expression `exampleArray[index]` evaluates to 2. Integer indexers are only allowed on expression of array types.
+The following example shows an array of integers and an array of different types.
```bicep
-var index = 1
-
-var exampleArray = [
+var integerArray = [
  1
  2
  3
]
-```
-
-String-based indexers are allowed in Bicep.
-```bicep
-param environment string = 'prod'
-
-var environmentSettings = {
- dev: {
- name: 'dev'
- }
- prod: {
- name: 'prod'
- }
-}
+var mixedArray = [
+ resourceGroup().name
+ 1
+ true
+ 'example string'
+]
```
-The expression environmentSettings['dev'] evaluates to the following object:
+Arrays in Bicep are 0-based. In the following example, the expression `exampleArray[0]` evaluates to 1 and `exampleArray[2]` evaluates to 3. The index of the indexer may itself be another expression. The expression `exampleArray[index]` evaluates to 2. Integer indexers are only allowed on expressions of array types.
```bicep
-{
- name: 'dev'
-}
-```
-
-The following example shows an array with different types.
+var index = 1
-```bicep
-var mixedArray = [
- resourceGroup().name
+var exampleArray = [
1
- true
- 'example string'
+ 2
+ 3
]
```
param exampleObject object = {
} ```
-Property accessors are used to access properties of an object. They're constructed using the `.` operator. For example:
+Property accessors are used to access properties of an object. They're constructed using the `.` operator.
```bicep
-var x = {
- y: {
- z: 'Hello`
- a: true
+var a = {
+ b: 'Dev'
+ c: 42
+ d: {
+ e: true
}
- q: 42
}
-```
-Given the previous declaration, the expression x.y.z evaluates to the literal string 'Hello'. Similarly, the expression x.q evaluates to the integer literal 42.
+output result1 string = a.b // returns 'Dev'
+output result2 int = a.c // returns 42
+output result3 bool = a.d.e // returns true
+```
Property accessors can be used with any object, including parameters and variables of object types and object literals. Using a property accessor on an expression of non-object type is an error.
+You can also use the `[]` syntax to access a property. The following example returns `Development`.
+
+```bicep
+var environmentSettings = {
+ dev: {
+ name: 'Development'
+ }
+ prod: {
+ name: 'Production'
+ }
+}
+
+output accessorResult string = environmentSettings['dev'].name
+```
+
## Strings
In Bicep, strings are marked with single quotes, and must be declared on a single line. All Unicode characters with codepoints between *0* and *10FFFF* are allowed.
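For example, both of the following declarations are valid; the backslash escape (`\'`) is how a literal single quote appears inside a Bicep string:

```bicep
var greeting = 'Hello, Bicep'
var escaped = 'It\'s still a single-line string'
```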
azure-resource-manager Loop Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/loop-modules.md
Last updated 08/27/2021
# Module iteration in Bicep
-This article shows you how to create more than one instance of a [module](modules.md) in your Bicep file. You can add a loop to the `module` section of your file and dynamically set the number of modules to deploy. You also avoid repeating syntax in your Bicep file.
+This article shows you how to deploy more than one instance of a [module](modules.md) in your Bicep file. You can add a loop to a `module` declaration and dynamically set the number of times to deploy that module. You avoid repeating syntax in your Bicep file.
You can also use a loop with [resources](loop-resources.md), [properties](loop-properties.md), [variables](loop-variables.md), and [outputs](loop-outputs.md). ## Syntax
-Loops can be used declare multiple modules by:
+Loops can be used to declare multiple modules by:
-- Iterating over an array.
+- Using a loop index.
```bicep
- module <module-symbolic-name> '<module-file>' = [for <item> in <collection>: {
+ module <module-symbolic-name> '<module-file>' = [for <index> in range(<start>, <stop>): {
<module-properties> }] ```
- You can retrieve the index while iterating through an array:
+ For more information, see [Loop index](#loop-index).
+
+- Iterating over an array.
```bicep
- module <module-symbolic-name> 'module-file' = [for (<item>, <index>) in <collection>: {
+ module <module-symbolic-name> '<module-file>' = [for <item> in <collection>: {
<module-properties> }] ``` -- Using a loop index.
+ For more information, see [Loop array](#loop-array).
+
+- Iterating over an array and index:
```bicep
- module <module-symbolic-name> '<module-file>' = [for <index> in range(<start>, <stop>): {
+ module <module-symbolic-name> 'module-file' = [for (<item>, <index>) in <collection>: {
<module-properties> }] ```
Loops can be used to declare multiple modules by:
The Bicep file's loop iterations can't be a negative number or exceed 800 iterations.
-## Module iteration
+## Loop index
-The following example creates the number of modules specified in the `storageCount` parameter. Each module creates a storage account.
+The following example deploys a module the number of times specified in the `storageCount` parameter. Each instance of the module creates a storage account.
```bicep param location string = resourceGroup().location
module stgModule './storageAccount.bicep' = [for i in range(0, storageCount): {
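The example above is abbreviated here. A self-contained sketch of the pattern, assuming the module at `./storageAccount.bicep` accepts `storageName` and `location` parameters:

```bicep
param location string = resourceGroup().location
param storageCount int = 2

// Deploys the module once per index; 'i' makes each deployment name unique.
module stgModule './storageAccount.bicep' = [for i in range(0, storageCount): {
  name: 'storageDeploy${i}'
  params: {
    storageName: '${i}storage${uniqueString(resourceGroup().id)}'
    location: location
  }
}]
```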
Notice the index `i` is used in creating the storage account resource name. The storage account is passed as a parameter value to the module.
-The following example creates one storage account for each name provided in the `storageNames` parameter by calling a module.
+## Loop array
+
+The following example deploys a module for each name provided in the `storageNames` parameter. The module creates a storage account.
```bicep param rgLocation string = resourceGroup().location
module stgModule './storageAccount.bicep' = [for name in storageNames: {
```
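A fuller sketch of the array-based module loop, again assuming the module accepts `storageName` and `location` parameters:

```bicep
param rgLocation string = resourceGroup().location
param storageNames array = [
  'contoso'
  'fabrikam'
]

// Deploys the module once per array element; the element seeds the deployment name.
module stgModule './storageAccount.bicep' = [for name in storageNames: {
  name: '${name}deploy'
  params: {
    storageName: name
    location: rgLocation
  }
}]
```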
-Directly referencing a resource module or module collection is not currently supported in output loops. In order to loop outputs, apply an array indexer to the expression. See an example in [Output iteration](loop-outputs.md#output-iteration).
+Referencing a module collection isn't supported in output loops. To output results from modules in a collection, apply an array indexer to the expression. For more information, see [Output iteration](loop-outputs.md).
## Module iteration with condition
For purely sequential deployment, set the batch size to 1.
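The batch size is set with the `@batchSize` decorator on the loop. A minimal sketch, reusing the assumed module from the earlier examples:

```bicep
param location string = resourceGroup().location
param storageCount int = 3

// With a batch size of 1, the module instances deploy one at a time instead of in parallel.
@batchSize(1)
module stgModule './storageAccount.bicep' = [for i in range(0, storageCount): {
  name: 'storageDeploy${i}'
  params: {
    storageName: '${i}storage${uniqueString(resourceGroup().id)}'
    location: location
  }
}]
```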
## Next steps - For other uses of the loop, see:
- - [Resource iteration in Bicep files](loop-resources.md)
- - [Property iteration in Bicep files](loop-properties.md)
- - [Variable iteration in Bicep files](loop-variables.md)
- - [Output iteration in Bicep files](loop-outputs.md)
-- If you want to learn about the sections of a Bicep file, see [Understand the structure and syntax of Bicep files](file.md).-- For information about how to deploy multiple resources, see [Use Bicep modules](modules.md).
+ - [Resource iteration in Bicep](loop-resources.md)
+ - [Property iteration in Bicep](loop-properties.md)
+ - [Variable iteration in Bicep](loop-variables.md)
+ - [Output iteration in Bicep](loop-outputs.md)
+- For information about modules, see [Use Bicep modules](modules.md).
- To set dependencies on resources that are created in a loop, see [Set resource dependencies](./resource-declaration.md#set-resource-dependencies).-- To learn how to deploy with PowerShell, see [Deploy resources with Bicep and Azure PowerShell](deploy-powershell.md).-- To learn how to deploy with Azure CLI, see [Deploy resources with Bicep and Azure CLI](deploy-cli.md).
azure-resource-manager Loop Outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/loop-outputs.md
description: Use a Bicep output loop to iterate and return deployment values.
Previously updated : 06/01/2021 Last updated : 08/30/2021 # Output iteration in Bicep
You can also use a loop with [modules](loop-modules.md), [resources](loop-resour
## Syntax
-Loops can be used to return many items during deployment by:
+Loops can be used to return items during deployment by:
-- Iterating over an array.
+- Using a loop index.
```bicep
- output <output-name> array = [for <item> in <collection>: {
+ output <output-name> array = [for <index> in range(<start>, <stop>): {
<properties> }]- ``` -- Iterating over the elements of an array.
+ For more information, see [Loop index](#loop-index).
+
+- Iterating over an array.
```bicep
- output <output-name> array = [for <item>, <index> in <collection>: {
+ output <output-name> array = [for <item> in <collection>: {
<properties> }]+ ``` -- Using a loop index.
+- Iterating over an array and index.
```bicep
- output <output-name> array = [for <index> in range(<start>, <stop>): {
+ output <output-name> array = [for <item>, <index> in <collection>: {
<properties> }] ```
+ For more information, see [Loop array and index](#loop-array-and-index).
+ ## Loop limits The Bicep file's loop iterations can't be a negative number or exceed 800 iterations.
-## Output iteration
+## Loop index
The following example creates a variable number of storage accounts and returns an endpoint for each storage account.
param storageCount int = 2
var baseNameVar = 'storage${uniqueString(resourceGroup().id)}'
-resource baseName 'Microsoft.Storage/storageAccounts@2021-02-01' = [for i in range(0, storageCount): {
+resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = [for i in range(0, storageCount): {
name: '${i}${baseNameVar}' location: rgLocation sku: {
resource baseName 'Microsoft.Storage/storageAccounts@2021-02-01' = [for i in ran
kind: 'Storage' }]
-output storageEndpoints array = [for i in range(0, storageCount): reference('${i}${baseNameVar}').primaryEndpoints.blob]
+output storageEndpoints array = [for i in range(0, storageCount): stg[i].properties.primaryEndpoints.blob]
``` The output returns an array with the following values:
The output returns an array with the following values:
] ```
-This example returns three properties from the new storage accounts.
+The next example returns three properties from the new storage accounts.
```bicep param rgLocation string = resourceGroup().location
param storageCount int = 2
var baseNameVar = 'storage${uniqueString(resourceGroup().id)}'
-resource baseName 'Microsoft.Storage/storageAccounts@2021-02-01' = [for i in range(0, storageCount): {
+resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = [for i in range(0, storageCount): {
name: '${i}${baseNameVar}' location: rgLocation sku: {
resource baseName 'Microsoft.Storage/storageAccounts@2021-02-01' = [for i in ran
}] output storageInfo array = [for i in range(0, storageCount): {
- id: reference('${i}${baseNameVar}', '2021-02-01', 'Full').resourceId
- blobEndpoint: reference('${i}${baseNameVar}').primaryEndpoints.blob
- status: reference('${i}${baseNameVar}').statusOfPrimary
+ id: stg[i].id
+ blobEndpoint: stg[i].properties.primaryEndpoints.blob
+ status: stg[i].properties.statusOfPrimary
}] ```
The output returns an array with the following values:
```json [ {
- "id": "Microsoft.Storage/storageAccounts/0storagecfrbqnnmpeudi",
+ "id": "/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/0storagecfrbqnnmpeudi",
"blobEndpoint": "https://0storagecfrbqnnmpeudi.blob.core.windows.net/", "status": "available" }, {
- "id": "Microsoft.Storage/storageAccounts/1storagecfrbqnnmpeudi",
+ "id": "/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/1storagecfrbqnnmpeudi",
"blobEndpoint": "https://1storagecfrbqnnmpeudi.blob.core.windows.net/", "status": "available" } ] ```
-This example uses an array index because direct references to a resource module or module collection aren't supported in output loops.
+## Loop array and index
+
+This example uses both the elements of an array and an index.
```bicep param rgLocation string = resourceGroup().location
The output returns an array with the following values:
[ { "name": "demostg1",
- "resourceId": "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Storage/storageAccounts/demostg1"
+ "resourceId": "/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/demostg1"
}, { "name": "demostg2",
- "resourceId": "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Storage/storageAccounts/demostg2"
+ "resourceId": "/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/demostg2"
}, { "name": "demostg3",
- "resourceId": "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Storage/storageAccounts/demostg3"
+ "resourceId": "/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/demostg3"
} ] ```
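A self-contained sketch consistent with the output above (the parameter values and symbolic names are assumptions):

```bicep
param rgLocation string = resourceGroup().location
param storageNames array = [
  'demostg1'
  'demostg2'
  'demostg3'
]

resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = [for name in storageNames: {
  name: name
  location: rgLocation
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'Storage'
}]

// Pairs each array element with its index to return both the name and the resource ID.
output storageInfo array = [for (name, i) in storageNames: {
  name: name
  resourceId: stg[i].id
}]
```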
The output returns an array with the following values:
## Next steps - For other uses of loops, see:
- - [Resource iteration in Bicep files](loop-resources.md)
- - [Property iteration in Bicep files](loop-properties.md)
- - [Variable iteration in Bicep files](loop-variables.md)
-- If you want to learn about the sections of a Bicep file, see [Understand the structure and syntax of Bicep files](file.md).-- For information about how to deploy multiple resources, see [Use Bicep modules](modules.md).
+ - [Resource iteration in Bicep](loop-resources.md)
+ - [Module iteration in Bicep](loop-modules.md)
+ - [Property iteration in Bicep](loop-properties.md)
+ - [Variable iteration in Bicep](loop-variables.md)
- To set dependencies on resources that are created in a loop, see [Set resource dependencies](./resource-declaration.md#set-resource-dependencies).-- To learn how to deploy with PowerShell, see [Deploy resources with Bicep and Azure PowerShell](deploy-powershell.md).-- To learn how to deploy with Azure CLI, see [Deploy resources with Bicep and Azure CLI](deploy-cli.md).+
azure-resource-manager Loop Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/loop-properties.md
description: Use a Bicep property loop to iterate when creating a resource prope
Previously updated : 06/01/2021 Last updated : 08/30/2021 # Property iteration in Bicep
-This article shows you how to create more than one instance of a property in Bicep file. You can add a loop to a resource's `properties` section and dynamically set the number of items for a property during deployment. You also avoid repeating syntax in your Bicep file.
+This article shows you how to create more than one instance of a property in a Bicep file. You can add a loop to a resource's `properties` section and dynamically set the number of items for a property. You avoid repeating syntax in your Bicep file.
You can only use a loop with top-level resources, even when applying a loop to a property. To learn about changing a child resource to a top-level resource, see [Iteration for a child resource](loop-resources.md#iteration-for-a-child-resource).
You can also use a loop with [modules](loop-modules.md), [resources](loop-resour
Loops can be used to declare multiple properties by: -- Iterating over an array.
+- Using a loop index.
```bicep
- <property-name>: [for <item> in <collection>: {
+ <property-name>: [for <index> in range(<start>, <stop>): {
<properties> }] ``` -- Iterating over the elements of an array.
+- Iterating over an array.
```bicep
- <property-name>: [for (<item>, <index>) in <collection>: {
+ <property-name>: [for <item> in <collection>: {
<properties> }] ``` -- Using a loop index.
+ For more information, see [Loop array](#loop-array).
+
+- Iterating over an array and index.
```bicep
- <property-name>: [for <index> in range(<start>, <stop>): {
+ <property-name>: [for (<item>, <index>) in <collection>: {
<properties> }] ```
Loops can be used to declare multiple properties by:
The Bicep file's loop iterations can't be a negative number or exceed 800 iterations.
-## Property iteration
+## Loop array
This example iterates through an array for the `subnets` property to create two subnets within a virtual network.
resource vnet 'Microsoft.Network/virtualNetworks@2020-07-01' = {
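The example is truncated in this change log. A fuller sketch of the property loop (the names and address prefixes are assumptions):

```bicep
param rgLocation string = resourceGroup().location

var subnets = [
  {
    name: 'api'
    subnetPrefix: '10.144.0.0/24'
  }
  {
    name: 'data'
    subnetPrefix: '10.144.1.0/24'
  }
]

resource vnet 'Microsoft.Network/virtualNetworks@2020-07-01' = {
  name: 'vnet'
  location: rgLocation
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.144.0.0/20'
      ]
    }
    // The loop runs inside the 'subnets' property rather than on the resource itself.
    subnets: [for subnet in subnets: {
      name: subnet.name
      properties: {
        addressPrefix: subnet.subnetPrefix
      }
    }]
  }
}
```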
## Next steps - For other uses of loops, see:
- - [Resource iteration in Bicep files](loop-resources.md)
- - [Variable iteration in Bicep files](loop-variables.md)
- - [Output iteration in Bicep files](loop-outputs.md)
-- If you want to learn about the sections of a Bicep file, see [Understand the structure and syntax of Bicep files](file.md).-- For information about how to deploy multiple resources, see [Use Bicep modules](modules.md).
+ - [Resource iteration in Bicep](loop-resources.md)
+ - [Module iteration in Bicep](loop-modules.md)
+ - [Variable iteration in Bicep](loop-variables.md)
+ - [Output iteration in Bicep](loop-outputs.md)
- To set dependencies on resources that are created in a loop, see [Set resource dependencies](./resource-declaration.md#set-resource-dependencies).-- To learn how to deploy with PowerShell, see [Deploy resources with Bicep and Azure PowerShell](deploy-powershell.md).-- To learn how to deploy with Azure CLI, see [Deploy resources with Bicep and Azure CLI](deploy-cli.md).
azure-resource-manager Loop Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/loop-resources.md
description: Use loops and arrays in a Bicep file to deploy multiple instances o
Previously updated : 07/19/2021 Last updated : 08/30/2021 # Resource iteration in Bicep
-This article shows you how to create more than one instance of a resource in your Bicep file. You can add a loop to the `resource` section of your file and dynamically set the number of resources to deploy. You also avoid repeating syntax in your Bicep file.
+This article shows you how to create more than one instance of a resource in your Bicep file. You can add a loop to a `resource` declaration and dynamically set the number of resources to deploy. You avoid repeating syntax in your Bicep file.
You can also use a loop with [modules](loop-modules.md), [properties](loop-properties.md), [variables](loop-variables.md), and [outputs](loop-outputs.md).
If you need to specify whether a resource is deployed at all, see [condition ele
Loops can be used to declare multiple resources by: -- Iterating over an array.
+- Using a loop index.
```bicep
- resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for <item> in <collection>: {
+ resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for <index> in range(<start>, <stop>): {
<resource-properties> }] ``` -- Iterating over the elements of an array.
+ For more information, see [Loop index](#loop-index).
+
+- Iterating over an array.
```bicep
- resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for (<item>, <index>) in <collection>: {
+ resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for <item> in <collection>: {
<resource-properties> }] ``` -- Using a loop index.
+ For more information, see [Loop array](#loop-array).
+
+- Iterating over an array and index.
```bicep
- resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for <index> in range(<start>, <stop>): {
+ resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for (<item>, <index>) in <collection>: {
<resource-properties> }] ```
+ For more information, see [Loop array and index](#loop-array-and-index).
+ ## Loop limits The Bicep file's loop iterations can't be a negative number or exceed 800 iterations.
-## Resource iteration
+## Loop index
The following example creates the number of storage accounts specified in the `storageCount` parameter.
resource storageAcct 'Microsoft.Storage/storageAccounts@2021-02-01' = [for i in
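A self-contained sketch of the index-based resource loop (the naming scheme is an assumption):

```bicep
param rgLocation string = resourceGroup().location
param storageCount int = 2

// 'i' is prepended to the base name so each storage account gets a unique name.
resource storageAcct 'Microsoft.Storage/storageAccounts@2021-02-01' = [for i in range(0, storageCount): {
  name: '${i}storage${uniqueString(resourceGroup().id)}'
  location: rgLocation
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'Storage'
}]
```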
Notice the index `i` is used in creating the storage account resource name.
+## Loop array
+ The following example creates one storage account for each name provided in the `storageNames` parameter. ```bicep
resource storageAcct 'Microsoft.Storage/storageAccounts@2021-02-01' = [for name
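A fuller sketch of the array-based resource loop (the sample names are assumptions):

```bicep
param rgLocation string = resourceGroup().location
param storageNames array = [
  'contoso'
  'fabrikam'
  'coho'
]

// Creates one storage account per array element; the element seeds the resource name.
resource storageAcct 'Microsoft.Storage/storageAccounts@2021-02-01' = [for name in storageNames: {
  name: '${name}${uniqueString(resourceGroup().id)}'
  location: rgLocation
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'Storage'
}]
```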
If you want to return values from the deployed resources, you can use a loop in the [output section](loop-outputs.md).
+## Loop array and index
+
+The following example uses both the array element and index value when defining the storage account.
+
+```bicep
+param storageAccountNamePrefix string
+
+var storageConfigurations = [
+ {
+ suffix: 'local'
+ sku: 'Standard_LRS'
+ }
+ {
+ suffix: 'geo'
+ sku: 'Standard_GRS'
+ }
+]
+
+resource storageAccountResources 'Microsoft.Storage/storageAccounts@2021-02-01' = [for (config, i) in storageConfigurations: {
+ name: '${storageAccountNamePrefix}${config.suffix}${i}'
+ location: resourceGroup().location
+ properties: {
+ supportsHttpsTrafficOnly: true
+ accessTier: 'Hot'
+ encryption: {
+ keySource: 'Microsoft.Storage'
+      services: {
+ blob: {
+ enabled: true
+ }
+ file: {
+ enabled: true
+ }
+ }
+ }
+ }
+ kind: 'StorageV2'
+ sku: {
+ name: config.sku
+ }
+}]
+```
+ ## Resource iteration with condition The following example shows a nested loop combined with a filtered resource loop. Filters must be expressions that evaluate to a boolean value.
resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-02-01
}] ```
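A minimal sketch of a filtered resource loop, where the `if` expression must evaluate to a boolean (the configuration objects are assumptions):

```bicep
param rgLocation string = resourceGroup().location

var storageConfig = [
  {
    name: 'contoso'
    enabled: true
  }
  {
    name: 'fabrikam'
    enabled: false
  }
]

// Only elements whose 'enabled' flag is true produce a deployed resource.
resource storageAcct 'Microsoft.Storage/storageAccounts@2021-02-01' = [for config in storageConfig: if (config.enabled) {
  name: '${config.name}${uniqueString(resourceGroup().id)}'
  location: rgLocation
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'Storage'
}]
```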
-## Example templates
-
-The following examples show common scenarios for creating more than one instance of a resource or property.
-
-|Template |Description |
-|||
-|[Loop storage](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/loopstorage.bicep) |Deploys more than one storage account with an index number in the name. |
-|[Serial loop storage](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/loopserialstorage.bicep) |Deploys several storage accounts one at time. The name includes the index number. |
-|[Loop storage with array](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/loopstoragewitharray.bicep) |Deploys several storage accounts. The name includes a value from an array. |
- ## Next steps - For other uses of the loop, see:
- - [Property iteration in Bicep files](loop-properties.md)
- - [Variable iteration in Bicep files](loop-variables.md)
- - [Output iteration in Bicep files](loop-outputs.md)
-- If you want to learn about the sections of a Bicep file, see [Understand the structure and syntax of Bicep files](file.md).-- For information about how to deploy multiple resources, see [Use Bicep modules](modules.md).
+ - [Property iteration in Bicep](loop-properties.md)
+ - [Variable iteration in Bicep](loop-variables.md)
+ - [Output iteration in Bicep](loop-outputs.md)
- To set dependencies on resources that are created in a loop, see [Set resource dependencies](./resource-declaration.md#set-resource-dependencies).-- To learn how to deploy with PowerShell, see [Deploy resources with Bicep and Azure PowerShell](deploy-powershell.md).-- To learn how to deploy with Azure CLI, see [Deploy resources with Bicep and Azure CLI](deploy-cli.md).
azure-resource-manager Loop Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/loop-variables.md
description: Use Bicep variable loop to iterate when creating a variable.
Previously updated : 06/01/2021 Last updated : 08/30/2021 # Variable iteration in Bicep
-This article shows you how to create more than one value for a variable in your Bicep file. You can add a loop to the `variables` section and dynamically set the number of items for a variable during deployment. You also avoid repeating syntax in your Bicep file.
+This article shows you how to create more than one value for a variable in your Bicep file. You can add a loop to the `variables` declaration and dynamically set the number of items for a variable. You avoid repeating syntax in your Bicep file.
You can also use copy with [modules](loop-modules.md), [resources](loop-resources.md), [properties in a resource](loop-properties.md), and [outputs](loop-outputs.md).
You can also use copy with [modules](loop-modules.md), [resources](loop-resource
Loops can be used to declare multiple variables by: -- Iterating over an array.
+- Using a loop index.
```bicep
- var <variable-name> = [for <item> in <collection>: {
+ var <variable-name> = [for <index> in range(<start>, <stop>): {
<properties> }]- ``` -- Iterating over the elements of an array.
+ For more information, see [Loop index](#loop-index).
+
+- Iterating over an array.
```bicep
- var <variable-name> = [for <item>, <index> in <collection>: {
+ var <variable-name> = [for <item> in <collection>: {
<properties> }]+ ``` -- Using a loop index.
+ For more information, see [Loop array](#loop-array).
+
+- Iterating over an array and index.
```bicep
- var <variable-name> = [for <index> in range(<start>, <stop>): {
+ var <variable-name> = [for <item>, <index> in <collection>: {
<properties> }] ```
Loops can be used to declare multiple variables by:
The Bicep file's loop iterations can't be a negative number or exceed 800 iterations.
-## Variable iteration
+## Loop index
The following example shows how to create an array of string values:
The output returns an array with the following values:
] ```
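A sketch of the index-based variable loop that produces an array of strings (the item naming is an assumption):

```bicep
param itemCount int = 5

// Builds an array like ['item1', 'item2', ..., 'item5'] without repeating syntax.
var stringArray = [for i in range(0, itemCount): 'item${(i + 1)}']

output arrayResult array = stringArray
```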
-## Example templates
+## Loop array
-The following examples show common scenarios for creating more than one value for a variable.
+The following example loops over an array that is passed in as a parameter. The variable constructs objects in the required format from the parameter.
-|Template |Description |
-|||
-|[Loop variables](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/loopvariables.bicep) | Demonstrates how to iterate on variables. |
-|[Multiple security rules](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/multiplesecurityrules.bicep) |Deploys several security rules to a network security group. It constructs the security rules from a parameter. For the parameter, see [multiple NSG parameter file](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/multiplesecurityrules.parameters.json). |
+```bicep
+@description('An array that contains objects with properties for the security rules.')
+param securityRules array = [
+ {
+ name: 'RDPAllow'
+ description: 'allow RDP connections'
+ direction: 'Inbound'
+ priority: 100
+ sourceAddressPrefix: '*'
+ destinationAddressPrefix: '10.0.0.0/24'
+ sourcePortRange: '*'
+ destinationPortRange: '3389'
+ access: 'Allow'
+ protocol: 'Tcp'
+ }
+ {
+ name: 'HTTPAllow'
+ description: 'allow HTTP connections'
+ direction: 'Inbound'
+ priority: 200
+ sourceAddressPrefix: '*'
+ destinationAddressPrefix: '10.0.1.0/24'
+ sourcePortRange: '*'
+ destinationPortRange: '80'
+ access: 'Allow'
+ protocol: 'Tcp'
+ }
+]
++
+var securityRulesVar = [for rule in securityRules: {
+ name: rule.name
+ properties: {
+ description: rule.description
+ priority: rule.priority
+ protocol: rule.protocol
+ sourcePortRange: rule.sourcePortRange
+ destinationPortRange: rule.destinationPortRange
+ sourceAddressPrefix: rule.sourceAddressPrefix
+ destinationAddressPrefix: rule.destinationAddressPrefix
+ access: rule.access
+ direction: rule.direction
+ }
+}]
+
+resource netSG 'Microsoft.Network/networkSecurityGroups@2020-11-01' = {
+ name: 'NSG1'
+ location: resourceGroup().location
+ properties: {
+ securityRules: securityRulesVar
+ }
+}
+```
## Next steps - For other uses of loops, see:
- - [Resource iteration in Bicep files](loop-resources.md)
- - [Property iteration in Bicep files](loop-properties.md)
- - [Output iteration in Bicep files](loop-outputs.md)
-- If you want to learn about the sections of a Bicep file, see [Understand the structure and syntax of Bicep files](file.md).-- For information about how to deploy multiple resources, see [Use Bicep modules](modules.md).
+ - [Resource iteration in Bicep](loop-resources.md)
+ - [Module iteration in Bicep](loop-modules.md)
+ - [Property iteration in Bicep](loop-properties.md)
+ - [Output iteration in Bicep](loop-outputs.md)
- To set dependencies on resources that are created in a loop, see [Set resource dependencies](./resource-declaration.md#set-resource-dependencies).-- To learn how to deploy with PowerShell, see [Deploy resources with Bicep and Azure PowerShell](deploy-powershell.md).-- To learn how to deploy with Azure CLI, see [Deploy resources with Bicep and Azure CLI](deploy-cli.md).
azure-resource-manager Operators Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/operators-access.md
description: Describes Bicep resource access operator and property access operat
Previously updated : 07/29/2021 Last updated : 08/30/2021 # Bicep accessor operators
-The accessor operators are used to access child resources and properties on objects. You can also use the property accessor to use some functions.
+The accessor operators are used to access child resources, properties on objects, and elements in an array. You can also use the property accessor to use some functions.
| Operator | Name |
| - | - |
+| `[]` | [Index accessor](#index-accessor) |
+| `.` | [Function accessor](#function-accessor) |
| `::` | [Nested resource accessor](#nested-resource-accessor) |
| `.` | [Property accessor](#property-accessor) |
-| `.` | [Function accessor](#function-accessor) |
+
+## Index accessor
+
+`array[index]`
+
+`object['index']`
+
+To get an element in an array, use `[index]` and provide an integer for the index.
+
+The following example gets an element in an array.
+
+```bicep
+var arrayVar = [
+ 'Coho'
+ 'Contoso'
+ 'Fabrikam'
+]
+
+output accessorResult string = arrayVar[1]
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| accessorResult | string | 'Contoso' |
+
+You can also use the index accessor to get an object property by name. You must use a string for the index, not an integer. The following example gets a property on an object.
+
+```bicep
+var environmentSettings = {
+ dev: {
+ name: 'Development'
+ }
+ prod: {
+ name: 'Production'
+ }
+}
+
+output accessorResult string = environmentSettings['dev'].name
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| accessorResult | string | 'Development' |
+
+## Function accessor
+
+`resourceName.functionName()`
+
+Two functions - [getSecret](bicep-functions-resource.md#getsecret) and [list*](bicep-functions-resource.md#list) - support the accessor operator for calling the function. These two functions are the only functions that support the accessor operator.
+
+### Example
+
+The following example references an existing key vault, then uses `getSecret` to pass a secret to a module.
+
+```bicep
+resource kv 'Microsoft.KeyVault/vaults@2019-09-01' existing = {
+ name: kvName
+ scope: resourceGroup(subscriptionId, kvResourceGroup)
+}
+
+module sql './sql.bicep' = {
+ name: 'deploySQL'
+ params: {
+ sqlServerName: sqlServerName
+ adminLogin: adminLogin
+ adminPassword: kv.getSecret('vmAdminPassword')
+ }
+}
+```
## Nested resource accessor
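A minimal sketch of the `::` operator (the resource names are assumptions):

```bicep
resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: 'examplestorage'
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }

  // A nested (child) resource declared inside the parent.
  resource service 'fileServices' = {
    name: 'default'
  }
}

// '::' reaches the nested resource from outside the parent declaration.
output serviceName string = storage::service.name
```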
resource publicIp 'Microsoft.Network/publicIPAddresses@2020-06-01' = {
output ipFqdn string = publicIp.properties.dnsSettings.fqdn ```
-## Function accessor
-
-`resourceName.functionName()`
-
-Two functions - [getSecret](bicep-functions-resource.md#getsecret) and [list*](bicep-functions-resource.md#list) - support the accessor operator for calling the function. These two functions are the only functions that support the accessor operator.
-
-### Example
-
-The following example references an existing key vault, then uses `getSecret` to pass a secret to a module.
-
-```bicep
-resource kv 'Microsoft.KeyVault/vaults@2019-09-01' existing = {
- name: kvName
- scope: resourceGroup(subscriptionId, kvResourceGroup )
-}
-
-module sql './sql.bicep' = {
- name: 'deploySQL'
- params: {
- sqlServerName: sqlServerName
- adminLogin: adminLogin
- adminPassword: kv.getSecret('vmAdminPassword')
- }
-}
-```
- ## Next steps - To run the examples, use Azure CLI or Azure PowerShell to [deploy a Bicep file](./quickstart-create-bicep-use-visual-studio-code.md#deploy-the-bicep-file).
azure-resource-manager Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/operators.md
description: Describes the Bicep operators available for Azure Resource Manager
Previously updated : 07/29/2021 Last updated : 08/30/2021 # Bicep operators
The accessor operators are used to access nested resources and properties on obj
| Operator | Name | Description |
| - | - | - |
+| `[]` | [Index accessor](./operators-access.md#index-accessor) | Access an element of an array or property on an object. |
+| `.` | [Function accessor](./operators-access.md#function-accessor) | Call a function on a resource. |
| `::` | [Nested resource accessor](./operators-access.md#nested-resource-accessor) | Access a nested resource from outside of the parent resource. |
| `.` | [Property accessor](./operators-access.md#property-accessor) | Access properties of an object. |
-| `.` | [Function accessor](./operators-access.md#function-accessor) | Call a function on a resource. |
## Comparison
azure-resource-manager Create Private Link Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/create-private-link-access-portal.md
+
+ Title: Create private link for managing resources - Azure portal
+description: Use Azure portal to create private link for managing resources.
+ Last updated : 07/29/2021++
+# Use portal to create private link for managing Azure resources
+
+This article explains how you can use [Azure Private Link](../../private-link/index.yml) to restrict access for managing resources in your subscriptions. It shows how to use the Azure portal to set up management of resources through private access.
++
+## Create resource management private link
+
+When you create a resource management private link, the private link association is automatically created for you.
+
+1. In the [portal](https://portal.azure.com), search for **Resource management private links** and select it from the available options.
+
+ :::image type="content" source="./media/create-private-link-access-portal/search.png" alt-text="Search for resource management private links":::
+
+1. If your subscription doesn't already have resource management private links, you'll see a blank page. Select **Create resource management private link**.
+
+ :::image type="content" source="./media/create-private-link-access-portal/start-create.png" alt-text="Select create for resource management private links":::
+
+1. Provide values for the new resource management private link. The root management group for the directory you selected is used for the new resource. Select **Review + create**.
+
+ :::image type="content" source="./media/create-private-link-access-portal/provide-values.png" alt-text="Specify values for resource management private links":::
+
+1. After validation passes, select **Create**.
+
+## Create private endpoint
+
+Now, create a private endpoint that references the resource management private link.
+
+1. Navigate to the **Private Link Center**. Select **Create private link**.
+
+ :::image type="content" source="./media/create-private-link-access-portal/private-link-center.png" alt-text="Select private link center":::
+
+1. In the **Basics** tab, provide values for your private endpoint.
+
+ :::image type="content" source="./media/create-private-link-access-portal/private-endpoint-basics.png" alt-text="Provide values for basics":::
+
+1. In the **Resource** tab, select **Connect to an Azure resource in my directory**. For resource type, select **Microsoft.Authorization/resourceManagementPrivateLinks**. For target subresource, select **ResourceManagement**.
+
+ :::image type="content" source="./media/create-private-link-access-portal/private-endpoint-resource.png" alt-text="Provide values for resource":::
+
+1. In the **Configuration** tab, select your virtual network. We recommend integrating with a private DNS zone. Select **Review + create**.
+
+1. After validation passes, select **Create**.
+
+## Verify private DNS zone
+
+To make sure your environment is properly configured, check the local IP address for the DNS zone.
+
+1. In the resource group where you deployed the private endpoint, select the private DNS zone resource named **privatelink.azure.com**.
+
+1. Verify that the record set named **management** has a valid local IP address.
+
+ :::image type="content" source="./media/create-private-link-access-portal/verify.png" alt-text="Verify local IP address":::
+
+## Next steps
+
+To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
azure-resource-manager Create Private Link Access Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/create-private-link-access-rest.md
+
+ Title: Manage resources through private link
+description: Restrict management access for resources to a private link
+ Last updated : 07/29/2021++
+# Use REST API to create private link for managing Azure resources
+
+This article explains how you can use [Azure Private Link](../../private-link/index.yml) to restrict access for managing resources in your subscriptions.
++
+## Create resource management private link
+
+To create a resource management private link, send the following request:
+
+```http
+PUT
+https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
+```
+
+In the request body, include the location you want for the resource:
+
+```json
+{
+ "location":"{region}"
+}
+```
+
+The operation returns:
+
+```json
+{
+ "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
+ "location": "{region}",
+ "name": "{rmplName}",
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "resourceGroup": "{rgName}",
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks"
+}
+```
+
+Note the ID that is returned for the new resource management private link. You'll use it for creating the private link association.
+
+## Create private link association
+
+To create the private link association, use:
+
+```http
+PUT
+https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupId}/providers/Microsoft.Authorization/privateLinkAssociations/{GUID}?api-version=2020-05-01
+```
+
+In the request body, include:
+
+```json
+{
+ "properties": {
+ "privateLink": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}",
+ "publicNetworkAccess": "enabled"
+ }
+}
+```
+
+The operation returns:
+
+```json
+{
+ "id": {plaResourceId},
+ "name": {plaName},
+ "properties": {
+ "privateLink": {rmplResourceId},
+ "publicNetworkAccess": "Enabled",
+ "tenantId": "{tenantId}",
+ "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
+ },
+ "type": "Microsoft.Authorization/privateLinkAssociations"
+}
+```
+
+## Add private endpoint
+
+This article assumes you already have a virtual network. In the subnet that will be used for the private endpoint, you must turn off private endpoint network policies. If you haven't turned off private endpoint network policies, see [Disable network policies for private endpoints](../../private-link/disable-private-endpoint-network-policy.md).
+
+To create a private endpoint, use the following operation:
+
+```http
+PUT
+https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/privateEndpoints/{privateEndpointName}?api-version=2020-11-01
+```
+
+In the request body, set the `privateLinkServiceId` to the ID of your resource management private link. The `groupIds` must contain `ResourceManagement`. The location of the private endpoint must be the same as the location of the subnet.
+
+```json
+{
+ "location": "westus2",
+ "properties": {
+ "privateLinkServiceConnections": [
+ {
+ "name": "{connection-name}",
+ "properties": {
+ "privateLinkServiceId": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
+ "groupIds": [
+ "ResourceManagement"
+ ]
+ }
+ }
+ ],
+ "subnet": {
+ "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Network/virtualNetworks/{vnet-name}/subnets/{subnet-name}"
+ }
+ }
+}
+```
+
+The next step varies depending on whether you're using automatic or manual approval. For more information about approval, see [Access to a private link resource using approval workflow](../../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow).
+
+The response includes approval state.
+
+```json
+"privateLinkServiceConnectionState": {
+ "actionsRequired": "None",
+ "description": "",
+ "status": "Approved"
+},
+```
+
+If your request is automatically approved, you can continue to the next section. If your request requires manual approval, wait for the network admin to approve your private endpoint connection.
+
+## Next steps
+
+To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
azure-resource-manager Manage Private Link Access Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/manage-private-link-access-rest.md
+
+ Title: Manage resource management private links
+description: Use REST API to manage existing resource management private links
+ Last updated : 07/29/2021++
+# Manage resource management private links with REST API
+
+This article explains how to work with existing resource management private links. It shows REST API operations for getting and deleting existing resources.
+
+If you need to create a resource management private link, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use REST API to create private link for managing Azure resources](create-private-link-access-rest.md).
+
+## Resource management private links
+
+To **get a specific** resource management private link, send the following request:
+
+```http
+GET
+https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
+```
+
+The operation returns:
+
+```json
+{
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "id": {rmplResourceId},
+ "name": {rmplName},
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
+ "location": {region}
+}
+```
+
+To **get all** resource management private links in a subscription, use:
+
+```http
+GET
+https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.Authorization/resourceManagementPrivateLinks?api-version=2020-05-01
+```
+
+The operation returns:
+
+```json
+[
+ {
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "id": {rmplResourceId},
+ "name": {rmplName},
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
+ "location": {region}
+ },
+ {
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "id": {rmplResourceId},
+ "name": {rmplName},
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
+ "location": {region}
+ }
+]
+```
+
+To **delete a specific** resource management private link, use:
+
+```http
+DELETE
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
+```
+
+The operation returns: `Status 200 OK`.
+
+## Private link association
+
+To **get a specific** private link association for a management group, use:
+
+```http
+GET
+https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations?api-version=2020-05-01
+```
+
+The operation returns:
+
+```json
+{
+ "value": [
+ {
+ "properties": {
+ "privateLink": {rmplResourceID},
+ "tenantId": {tenantId},
+ "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
+ },
+ "id": {plaResourceId},
+ "type": "Microsoft.Authorization/privateLinkAssociations",
+ "name": {plaName}
+ }
+ ]
+}
+```
+
+To **delete** a private link association, use:
+
+```http
+DELETE
+https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations/{plaID}?api-version=2020-05-01
+```
+
+The operation returns: `Status 200 OK`.
+
+## Private endpoints
+
+To **get all** private endpoints in a subscription, use:
+
+```http
+GET
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/privateEndpoints?api-version=2020-04-01
+```
+
+The operation returns:
+
+```json
+{
+ "value": [
+ {
+ "name": {privateEndpointName},
+ "id": {privateEndpointResourceId},
+ "etag": {etag},
+ "type": "Microsoft.Network/privateEndpoints",
+ "location": {region},
+ "properties": {
+ "provisioningState": "Updating",
+ "resourceGuid": {GUID},
+ "privateLinkServiceConnections": [
+ {
+ "name": {connectionName},
+ "id": {connectionResourceId},
+ "etag": {etag},
+ "properties": {
+ "provisioningState": "Succeeded",
+ "privateLinkServiceId": {rmplResourceId},
+ "groupIds": [
+ "ResourceManagement"
+ ],
+ "privateLinkServiceConnectionState": {
+ "status": "Approved",
+ "description": "",
+ "actionsRequired": "None"
+ }
+ },
+ "type": "Microsoft.Network/privateEndpoints/privateLinkServiceConnections"
+ }
+ ],
+ "manualPrivateLinkServiceConnections": [],
+ "subnet": {
+ "id": {subnetResourceId}
+ },
+ "networkInterfaces": [
+ {
+ "id": {networkInterfaceResourceId}
+ }
+ ],
+ "customDnsConfigs": [
+ {
+ "fqdn": "management.azure.com",
+ "ipAddresses": [
+ "10.0.0.4"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+## Next steps
+
+* To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
+* To create a resource management private link, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use REST API to create private link for managing Azure resources](create-private-link-access-rest.md).
azure-signalr Authenticate Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/authenticate-managed-identity.md
Follow these instructions to manage role assignments:
![Add button on the toolbar](./media/authenticate/role-assignments-add-button.png) 1. On the **Add role assignment** page, do the following steps:
- 1. Select the **SignalR App Server** as the role. Note that this also applies to **Azure Functions App**.
+ 1. Select the **SignalR Service Owner** as the role.
1. Search to locate the **security principal** (user, group, service principal) to which you want to assign the role. 1. Select **Save** to save the role assignment.
azure-sql Management Operations Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/management-operations-overview.md
ms.devlang:
- Previously updated : 06/08/2021+ Last updated : 08/20/2021 # Overview of Azure SQL Managed Instance management operations
Subsequent management operations on managed instances may impact the underlying
The duration of operations on the virtual cluster can vary, but typically have the longest duration.
-The following are values that you can typically expect, based on existing service telemetry data:
+The following table lists the long-running steps that can be triggered as part of the create, update, or delete operation. The table also lists the durations that you can typically expect, based on existing service telemetry data:
-- **Virtual cluster creation**: Creation is a synchronous step in instance management operations. <br/> **90% of operations finish in 4 hours**.-- **Virtual cluster resizing (expansion or shrinking)**: Expansion is a synchronous step, while shrinking is performed asynchronously (without impact on the duration of instance management operations). <br/>**90% of cluster expansions finish in less than 2.5 hours**.-- **Virtual cluster deletion**: Deletion is an asynchronous step, but it can also be [initiated manually](virtual-cluster-delete.md) on an empty virtual cluster, in which case it executes synchronously. <br/>**90% of virtual cluster deletions finish in 1.5 hours**.
+|Step|Description|Estimated duration|
+||||
+|**Virtual cluster creation**|Creation is a synchronous step in instance management operations.|**90% of operations finish in 4 hours**|
+|**Virtual cluster resizing (expansion or shrinking)**|Expansion is a synchronous step, while shrinking is performed asynchronously (without impact on the duration of instance management operations).|**90% of cluster expansions finish in less than 2.5 hours**|
+|**Virtual cluster deletion**|Virtual cluster deletion can be synchronous or asynchronous. Asynchronous deletion is performed in the background and is triggered when there are multiple virtual clusters inside the same subnet and the last instance in a non-last cluster is deleted. Synchronous deletion of the virtual cluster is triggered as part of deleting the very last instance in the subnet.|**90% of cluster deletions finish in 1.5 hours**|
+|**Seeding database files**<sup>1</sup>|A synchronous step, triggered during compute (vCores) or storage scaling in the Business Critical service tier, and when changing the service tier from General Purpose to Business Critical (or vice versa). The duration of this operation is proportional to the total database size and the current database activity (number of active transactions). Database activity when updating an instance can introduce significant variance to the total duration.|**90% of these operations execute at 220 GB/hour or higher**|
+
+<sup>1</sup> When scaling compute (vCores) or storage in Business Critical service tier, or switching service tier from General Purpose to Business Critical, seeding also includes Always On availability group seeding.
-Additionally, management of instances may also include one of the operations on hosted databases, which result in longer durations:
+> [!IMPORTANT]
+> Scaling storage up or down in the General Purpose service tier consists of updating metadata and propagating the response for the submitted request. It's a fast operation that completes in up to 5 minutes, without downtime or failover.
-- **Attaching database files from Azure Storage**: A synchronous step, such as scaling compute (vCores), or storage up or down in the General Purpose service tier. <br/>**90% of these operations finish in 5 minutes**.-- **Always On availability group seeding**: A synchronous step, such as compute (vCores), or storage scaling in the Business Critical service tier as well as in changing the service tier from General Purpose to Business Critical (or vice versa). Duration of this operation is proportional to the total database size as well as current database activity (number of active transactions). Database activity when updating an instance can introduce significant variance to the total duration. <br/>**90% of these operations execute at 220 GB/hour or higher**.
+### Management operations long-running segments
The following tables summarize operations and typical overall durations, based on the category of the operation:
The following tables summarize operations and typical overall durations, based o
|Subsequent instance creation within the non-empty subnet (2nd, 3rd, etc. instance)|Virtual cluster resizing|90% of operations finish in 2.5 hours.| | | |
-<sup>1</sup> Virtual cluster is built per hardware generation.
+<sup>1</sup> A virtual cluster is built per hardware generation and maintenance window configuration.
**Category: Update** |Operation |Long-running segment |Estimated duration | |||| |Instance property change (admin password, Azure AD login, Azure Hybrid Benefit flag)|N/A|Up to 1 minute.|
-|Instance storage scaling up/down (General Purpose service tier)|No long-running segment<sup>1</sup>|99% of operations finish in 5 minutes.|
+|Instance storage scaling up/down (General Purpose service tier)|No long-running segment|99% of operations finish in 5 minutes.|
|Instance storage scaling up/down (Business Critical service tier)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).| |Instance compute (vCores) scaling up and down (General Purpose)|- Virtual cluster resizing<br>- Attaching database files|90% of operations finish in 2.5 hours.| |Instance compute (vCores) scaling up and down (Business Critical)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).| |Instance service tier change (General Purpose to Business Critical and vice versa)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).| | | |
-<sup>1</sup> Scaling General Purpose managed instance storage will not cause a failover at the end of operation. In this case operation consists of updating meta data and propagating response for submitted request.
- **Category: Delete** |Operation |Long-running segment |Estimated duration | ||||
-|Instance deletion|Log tail backup for all databases|90% operations finish in up to 1 minute.<br>Note: if the last instance in the subnet is deleted, this operation will schedule virtual cluster deletion after 12 hours.<sup>1</sup>|
-|Virtual cluster deletion (as user-initiated operation)|Virtual cluster deletion|90% of operations finish in up to 1.5 hours.|
+|Non-last instance deletion|Log tail backup for all databases|90% of operations finish in up to 1 minute.<sup>1</sup>|
+|Last instance deletion |- Log tail backup for all databases <br> - Virtual cluster deletion|90% of operations finish in up to 1.5 hours.<sup>2</sup>|
| | |
-<sup>1</sup>12 hours is the current configuration but this is subject to change in the future. If you need to delete a virtual cluster earlier (to release the subnet, for example), see [Delete a subnet after deleting a managed instance](virtual-cluster-delete.md).
+<sup>1</sup> If there are multiple virtual clusters in the subnet and the last instance in a virtual cluster is deleted, this operation immediately triggers **asynchronous** deletion of that virtual cluster.
+
+<sup>2</sup> Deletion of the last instance in the subnet immediately triggers **synchronous** deletion of the virtual cluster.
+
+> [!IMPORTANT]
+> As soon as the delete operation is triggered, billing for SQL Managed Instance is disabled. The duration of the delete operation doesn't affect billing.
## Instance availability
Management operations consist of multiple steps. With [Operations API introduced
|Step name |Step description | |-||
-|Request validation | Submitted parameters are validated. In case of misconfiguration operation will fail with an error. |
+|Request validation |Submitted parameters are validated. In case of misconfiguration, the operation fails with an error. |
|Virtual cluster resizing / creation |Depending on the state of the subnet, the virtual cluster goes into creation or resizing. |
-|New SQL instance startup | SQL process is started on deployed virtual cluster. |
+|New SQL instance startup |SQL process is started on the deployed virtual cluster. |
|Seeding database files / attaching database files |Depending on the type of the update operation, either database seeding or attaching database files is performed. |
|Preparing failover and failover |After data has been seeded or database files reattached, the system is prepared for the failover. When everything is set, failover is performed **with a short downtime**. |
|Old SQL instance cleanup |Removing the old SQL process from the virtual cluster |
+### Managed instance delete steps
+|Step name |Step description |
+|-||
+|Request validation |Submitted parameters are validated. In case of misconfiguration, the operation fails with an error. |
+|SQL instance cleanup |Removes the SQL process from the virtual cluster. |
+|Virtual cluster deletion |If the instance being deleted is the last in the subnet, the virtual cluster is synchronously deleted as the last step. |
+ > [!NOTE]
-> Once instance scaling is completed, underlying virtual cluster will go through process of releasing unused capacity and possible capacity defragmentation, which could impact instances from the same subnet that did not participate in scaling operation, causing their failover.
+> As a result of scaling instances, the underlying virtual cluster goes through a process of releasing unused capacity and possible capacity defragmentation, which can impact instances that didn't participate in the creation or scaling operations.
## Management operations cross-impact
azure-sql Virtual Cluster Delete https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/virtual-cluster-delete.md
Previously updated : 06/26/2019 Last updated : 08/20/2021 # Delete a subnet after deleting an Azure SQL Managed Instance
Last updated 06/26/2019
This article provides guidelines on how to manually delete a subnet after deleting the last Azure SQL Managed Instance residing in it.
-SQL Managed Instances are deployed into [virtual clusters](connectivity-architecture-overview.md#virtual-cluster-connectivity-architecture). Each virtual cluster is associated with a subnet. The virtual cluster persists by design for 12 hours after the last instance deletion to enable you to more quickly create SQL Managed Instances in the same subnet. There's no charge for keeping an empty virtual cluster. During this period, the subnet associated with the virtual cluster can't be deleted.
+SQL Managed Instances are deployed into [virtual clusters](connectivity-architecture-overview.md#virtual-cluster-connectivity-architecture). Each virtual cluster is associated with a subnet and is deployed together with the creation of the first instance. In the same way, a virtual cluster is automatically removed when the last instance is deleted, leaving the subnet empty and ready for removal. No manual action on the virtual cluster is needed to release the subnet. Once the virtual cluster is deleted, you can delete the subnet.
-If you don't want to wait 12 hours and prefer to delete the virtual cluster and its subnet sooner, you can do so manually. Delete the virtual cluster manually by using the Azure portal or the Virtual Clusters API.
+There are rare circumstances in which the create operation can fail and result in a deployed empty virtual cluster. Additionally, because instance creation [can be canceled](management-operations-cancel.md), it's possible for a virtual cluster to be deployed with instances inside it in a failed state. Virtual cluster removal is automatically initiated in these situations and performed in the background.
+
+> [!NOTE]
+> There are no charges for keeping an empty virtual cluster or instances that have failed to create.
> [!IMPORTANT]
-> - The virtual cluster should contain no SQL Managed Instances for the deletion to be successful.
-> - Deletion of a virtual cluster is a long-running operation lasting for about 1.5 hours (see [SQL Managed Instance management operations](./sql-managed-instance-paas-overview.md#management-operations) for up-to-date virtual cluster delete time). The virtual cluster will still be visible in the portal until this process is completed.
+> - The virtual cluster should contain no SQL Managed Instances for the deletion to be successful. This does not include instances that have failed to create.
+> - Deletion of a virtual cluster is a long-running operation lasting for about 1.5 hours (see [SQL Managed Instance management operations](management-operations-overview.md) for up-to-date virtual cluster delete time). The virtual cluster will still be visible in the portal until this process is completed.
+> - Only one delete operation can run on a virtual cluster at a time. All subsequent customer-initiated delete requests result in an error because a delete operation is already in progress.
## Delete a virtual cluster from the Azure portal
+> [!IMPORTANT]
+> Starting September 1, 2021, all virtual clusters are automatically removed when the last instance in the cluster has been deleted. Manual removal of the virtual cluster is no longer required.
+ To delete a virtual cluster by using the Azure portal, search for the virtual cluster resources.
-![Screenshot of the Azure portal, with search box highlighted](./media/virtual-cluster-delete/virtual-clusters-search.png)
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Azure portal, with search box highlighted](./media/virtual-cluster-delete/virtual-clusters-search.png)
After you locate the virtual cluster you want to delete, select this resource, and select **Delete**. You're prompted to confirm the virtual cluster deletion.
-![Screenshot of the Azure portal Virtual clusters dashboard, with the Delete option highlighted](./media/virtual-cluster-delete/virtual-clusters-delete.png)
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Azure portal Virtual clusters dashboard, with the Delete option highlighted](./media/virtual-cluster-delete/virtual-clusters-delete.png)
Azure portal notifications will show you a confirmation that the request to delete the virtual cluster has been successfully submitted. The deletion operation itself will last for about 1.5 hours, during which the virtual cluster will still be visible in portal. Once the process is completed, the virtual cluster will no longer be visible and the subnet associated with it will be released for reuse.
> [!TIP]
-> If there are no SQL Managed Instances shown in the virtual cluster, and you are unable to delete the virtual cluster, ensure that you do not have an ongoing instance deployment in progress. This includes started and canceled deployments that are still in progress. This is because these operations will still use the virtual cluster, locking it from deletion. Reviewing the **Deployments** tab of the resource group the instance was deployed to will indicate any deployments in progress. In this case, wait for the deployment to complete, delete the SQL Managed Instance, and then delete the virtual cluster.
+> If there are no SQL Managed Instances shown in the virtual cluster, and you are unable to delete the virtual cluster, ensure that you do not have an ongoing instance deployment in progress. This includes started and canceled deployments that are still in progress. This is because these operations will still use the virtual cluster, locking it from deletion. Review the **Deployments** tab of the resource group where the instance was deployed to see any deployments in progress. In this case, wait for the deployment to complete, then delete the SQL Managed Instance. The virtual cluster will be synchronously deleted as part of the instance removal.
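If you prefer to check for in-progress deployments from the command line rather than the portal, a sketch like the following could help; the `--query` filter and output shape are assumptions to verify against your Azure CLI version:

```azurecli-interactive
# Sketch: list deployments in the resource group that are still running
az deployment group list \
    --resource-group <resource_group_name> \
    --query "[?properties.provisioningState=='Running'].name"
```

An empty result suggests no deployment is holding the virtual cluster.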
## Delete a virtual cluster by using the API
To delete a virtual cluster through the API, use the URI parameters specified in
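As a sketch of such a call using the Azure CLI's generic `az rest` command (the `Microsoft.Sql/virtualClusters` resource path and the API version shown here are assumptions; confirm them against the Virtual Clusters API reference):

```azurecli-interactive
# Sketch: delete a virtual cluster through the REST API.
# The resource path and api-version are assumptions; verify before running.
az rest --method delete \
    --url "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Sql/virtualClusters/<virtual_cluster_name>?api-version=2020-11-01-preview"
```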
- Learn about [connectivity architecture in SQL Managed Instance](connectivity-architecture-overview.md).
- Learn how to [modify an existing virtual network for SQL Managed Instance](vnet-existing-add-subnet.md).
- For a tutorial that shows how to create a virtual network, create an Azure SQL Managed Instance, and restore a database from a database backup, see [Create an Azure SQL Managed Instance (portal)](instance-create-quickstart.md).
-- For DNS issues, see [Configure a custom DNS](custom-dns-configure.md).
+- For DNS issues, see [Configure a custom DNS](custom-dns-configure.md).
azure-sql Automated Backup Sql 2014 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/automated-backup-sql-2014.md
You can use PowerShell to configure Automated Backup. Before you begin, you must
[!INCLUDE [updated-for-az.md](../../../../includes/updated-for-az.md)]
-### Install the SQL Server IaaS Extension
-If you provisioned a SQL Server VM from the Azure portal, the SQL Server IaaS Extension should already be installed. You can determine whether it is installed for your VM by calling **Get-AzVM** command and examining the **Extensions** property.
-
-```powershell
-$vmname = "vmname"
-$resourcegroupname = "resourcegroupname"
-
-(Get-AzVM -Name $vmname -ResourceGroupName $resourcegroupname).Extensions
-```
-
-If the SQL Server IaaS Agent extension is installed, you should see it listed as "SqlIaaSAgent" or "SQLIaaSExtension." **ProvisioningState** for the extension should also show "Succeeded."
-
-If it is not installed or it has failed to be provisioned, you can install it with the following command. In addition to the VM name and resource group, you must also specify the region (**$region**) that your VM is located in. Specify the license type for your SQL Server VM, choosing between either pay-as-you-go or bring-your-own-license via the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/). For more information about licensing, see [licensing model](licensing-model-azure-hybrid-benefit-ahb-change.md).
-
-```powershell
-New-AzSqlVM -Name $vmname `
- -ResourceGroupName $resourcegroupname `
- -Location $region -LicenseType <PAYG/AHUB>
-```
-
-> [!IMPORTANT]
-> If the extension is not already installed, installing the extension restarts SQL Server.
### <a id="verifysettings"></a> Verify current settings
azure-sql Automated Patching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/automated-patching.md
Set-AzVMSqlServerExtension -AutoPatchingSettings $aps -VMName $vmname -ResourceGroupName $resourcegroupname ```
-> [!IMPORTANT]
-> If the extension is not already installed, installing it restarts SQL Server.
- Based on this example, the following table describes the practical effect on the target Azure VM: | Parameter | Effect |
azure-sql Change Sql Server Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/change-sql-server-version.md
After you change the version of SQL Server, register your SQL Server VM with the
:::image type="content" source="./media/change-sql-server-version/verify-portal.png" alt-text="Verify version"::: > [!NOTE]
-> If you have already registered with the SQL IaaS Agent extension, [unregister from the RP](sql-agent-extension-manually-register-single-vm.md#unregister-from-extension) and then [Register the SQL VM resource](sql-agent-extension-manually-register-single-vm.md#register-with-extension) again so that it detects the correct version and edition of SQL Server that is installed on the VM. This updates the metadata and billing information that is associated with this VM.
+> If you have already registered with the SQL IaaS Agent extension, [unregister from the RP](sql-agent-extension-manually-register-single-vm.md#unregister-from-extension) and then [Register the SQL VM resource](sql-agent-extension-manually-register-single-vm.md#full-mode) again so that it detects the correct version and edition of SQL Server that is installed on the VM. This updates the metadata and billing information that is associated with this VM.
## Remarks
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes.md
vm-windows-sql-server Previously updated : 07/21/2021 Last updated : 09/01/2021 # Documentation changes for SQL Server on Azure Virtual Machines [!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)] Azure allows you to deploy a virtual machine (VM) with an image of SQL Server built in. This article summarizes the documentation changes associated with new features and improvements in the recent releases of [SQL Server on Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/).
+## September 2021
+
+| Changes | Details |
+| | |
+| **SQL IaaS extension full mode no longer requires restart** | Restarting the SQL Server service is no longer necessary when registering your SQL Server VM with the [SQL IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md) in [full mode](sql-agent-extension-manually-register-single-vm.md#full-mode)! |
++ ## July 2021 | Changes | Details |
azure-sql Failover Cluster Instance Azure Shared Disks Manually Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-azure-shared-disks-manually-configure.md
The FCI data directories need to be on the Azure Shared Disks.
## Register with the SQL VM RP
-To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent extension (RP) in [lightweight management mode](sql-agent-extension-manually-register-single-vm.md#lightweight-management-mode), currently the only mode supported with FCI and SQL Server on Azure VMs.
+To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent extension (RP) in [lightweight management mode](sql-agent-extension-manually-register-single-vm.md#lightweight-mode), currently the only mode supported with FCI and SQL Server on Azure VMs.
Register a SQL Server VM in lightweight mode with PowerShell:
azure-sql Failover Cluster Instance Premium File Share Manually Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-premium-file-share-manually-configure.md
After you've configured the failover cluster, you can create the SQL Server FCI.
## Register with the SQL VM RP
-To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent extension (RP) in [lightweight management mode](sql-agent-extension-manually-register-single-vm.md#lightweight-management-mode), currently the only mode that's supported with FCI and SQL Server on Azure VMs.
+To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent extension (RP) in [lightweight management mode](sql-agent-extension-manually-register-single-vm.md#lightweight-mode), currently the only mode that's supported with FCI and SQL Server on Azure VMs.
Register a SQL Server VM in lightweight mode with PowerShell (-LicenseType can be `PAYG` or `AHUB`):
You can configure a virtual network name, or a distributed network name for a fa
- Filestream isn't supported for a failover cluster with a premium file share. To use filestream, deploy your cluster by using [Storage Spaces Direct](failover-cluster-instance-storage-spaces-direct-manually-configure.md) or [Azure shared disks](failover-cluster-instance-azure-shared-disks-manually-configure.md) instead. - Only registering with the SQL IaaS Agent extension in [lightweight management mode](sql-server-iaas-agent-extension-automate-management.md#management-modes) is supported. - Database Snapshots are not currently supported with [Azure Files due to sparse files limitations](/rest/api/storageservices/features-not-supported-by-the-azure-file-service).-- Running DBCC CHECKDB is not currently supported as Database Snapshots cannot be created.
+- Since database snapshots are not supported, CHECKDB for user databases falls back to CHECKDB WITH TABLOCK. TABLOCK limits the checks that are performed: DBCC CHECKCATALOG is not run on the database, and Service Broker data is not validated.
+- CHECKDB on the MASTER and MSDB databases is not supported.
- Databases that use the in-memory OLTP feature are not supported on a failover cluster instance deployed with a premium file share. If your business requires in-memory OLTP, consider deploying your FCI with [Azure shared disks](failover-cluster-instance-azure-shared-disks-manually-configure.md) or [Storage Spaces Direct](failover-cluster-instance-storage-spaces-direct-manually-configure.md) instead. ## Next steps
azure-sql Failover Cluster Instance Storage Spaces Direct Manually Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-storage-spaces-direct-manually-configure.md
After you've configured the failover cluster and all cluster components, includi
## Register with the SQL VM RP
-To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent extension (RP) in [lightweight management mode](sql-agent-extension-manually-register-single-vm.md#lightweight-management-mode), currently the only mode that's supported with FCI and SQL Server on Azure VMs.
+To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent extension (RP) in [lightweight management mode](sql-agent-extension-manually-register-single-vm.md#lightweight-mode), currently the only mode that's supported with FCI and SQL Server on Azure VMs.
Register a SQL Server VM in lightweight mode with PowerShell:
azure-sql Sql Agent Extension Automatic Registration All Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-automatic-registration-all-vms.md
vm-windows-sql-server Previously updated : 11/07/2020 Last updated : 9/01/2021 # Automatic registration with SQL IaaS Agent extension
Enable the automatic registration feature in the Azure portal to automatically r
This article teaches you to enable the automatic registration feature. Alternatively, you can [register a single VM](sql-agent-extension-manually-register-single-vm.md), or [register your VMs in bulk](sql-agent-extension-manually-register-vms-bulk.md) with the SQL IaaS Agent extension.
+> [!NOTE]
+> Starting in September 2021, registering with the SQL IaaS extension in full mode no longer requires restarting the SQL Server service.
+ ## Overview Register your SQL Server VM with the [SQL IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md) to unlock a full feature set of benefits.
azure-sql Sql Agent Extension Manually Register Single Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md
ms.devlang: na
vm-windows-sql-server Previously updated : 07/21/2021 Last updated : 09/01/2021
Register your SQL Server VM with the [SQL IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md) to unlock a wealth of feature benefits for your SQL Server on Azure VM.
-This article teaches you to register a single SQL Server VM with the SQL IaaS Agent extension. Alternatively, you can register all SQL Server VMs [automatically](sql-agent-extension-automatic-registration-all-vms.md) or [multiple VMs scripted in bulk](sql-agent-extension-manually-register-vms-bulk.md).
+This article teaches you to register a single SQL Server VM with the SQL IaaS Agent extension. Alternatively, you can register all SQL Server VMs in a subscription [automatically](sql-agent-extension-automatic-registration-all-vms.md) or [multiple VMs scripted in bulk](sql-agent-extension-manually-register-vms-bulk.md).
+
+> [!NOTE]
+> Starting in September 2021, registering with the SQL IaaS extension in full mode no longer requires restarting the SQL Server service.
## Overview
-Registering with the [SQL Server IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md) creates the **SQL virtual machine** _resource_ within your subscription, which is a _separate_ resource from the virtual machine resource. Unregistering your SQL Server VM from the extension will remove the **SQL virtual machine** _resource_ but will not drop the actual virtual machine.
+Registering with the [SQL Server IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md) creates the [**SQL virtual machine** _resource_](manage-sql-vm-portal.md) within your subscription, which is a _separate_ resource from the virtual machine resource. Unregistering your SQL Server VM from the extension will remove the **SQL virtual machine** _resource_ but will not drop the actual virtual machine.
Deploying a SQL Server VM Azure Marketplace image through the Azure portal automatically registers the SQL Server VM with the extension. However, if you choose to self-install SQL Server on an Azure virtual machine, or provision an Azure virtual machine from a custom VHD, then you must register your SQL Server VM with the SQL IaaS Agent extension to unlock full feature benefits and manageability.
-To utilize the SQL IaaS Agent extension, you must first [register your subscription with the **Microsoft.SqlVirtualMachine** provider](#register-subscription-with-resource-provider), which gives the SQL IaaS extension the ability to create resources within that specific subscription.
+To utilize the SQL IaaS Agent extension, you must first [register your subscription with the **Microsoft.SqlVirtualMachine** provider](#register-subscription-with-rp), which gives the SQL IaaS extension the ability to create resources within that specific subscription.
> [!IMPORTANT] > The SQL IaaS Agent extension collects data for the express purpose of giving customers optional benefits when using SQL Server within Azure Virtual Machines. Microsoft will not use this data for licensing audits without the customer's advance consent. See the [SQL Server privacy supplement](/sql/sql-server/sql-server-privacy#non-personal-data) for more information.
To register your SQL Server VM with the extension, you'll need:
- An Azure Resource Model [Windows Server 2008 (or greater) virtual machine](../../../virtual-machines/windows/quick-create-portal.md) with [SQL Server 2008 (or greater)](https://www.microsoft.com/sql-server/sql-server-downloads) deployed to the public or Azure Government cloud. - The latest version of [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell (5.0 minimum)](/powershell/azure/install-az-ps).
-## Register subscription with Resource Provider
+## Register subscription with RP
-To register your SQL Server VM with the SQL IaaS Agent extension, you must first register your subscription with **Microsoft.SqlVirtualMachine** resource provider. This gives the SQL IaaS Agent extension the ability to create resources within your subscription. You can do so by using the Azure portal, the Azure CLI, or Azure PowerShell.
+To register your SQL Server VM with the SQL IaaS Agent extension, you must first register your subscription with the **Microsoft.SqlVirtualMachine** resource provider (RP). This gives the SQL IaaS Agent extension the ability to create resources within your subscription. You can do so by using the Azure portal, the Azure CLI, or Azure PowerShell.
### Azure portal
Register-AzResourceProvider -ProviderNamespace Microsoft.SqlVirtualMachine
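If you prefer the Azure CLI, an equivalent registration is sketched below using the standard provider commands:

```azurecli-interactive
# Register the Microsoft.SqlVirtualMachine resource provider for the current subscription
az provider register --namespace Microsoft.SqlVirtualMachine

# Optionally confirm the registration state
az provider show --namespace Microsoft.SqlVirtualMachine --query "registrationState"
```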
-## Register with extension
+## Full mode
+
+To register your SQL Server VM directly in full mode, use the following Azure PowerShell command:
+
+ ```powershell-interactive
+ # Get the existing Compute VM
+ $vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>
+
+ # Register with SQL IaaS Agent extension in full mode
+ New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -SqlManagementType Full
+
+ ```
+
+To learn more about full mode, see [management modes](sql-server-iaas-agent-extension-automate-management.md#management-modes).
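A rough Azure CLI counterpart to the PowerShell command above is sketched here; the flag names follow the `az sql vm` command group and should be verified against your installed CLI version:

```azurecli-interactive
# Sketch: register an existing VM with the SQL IaaS Agent extension in full mode
az sql vm create --name <vm_name> --resource-group <resource_group_name> \
    --license-type PAYG --sql-mgmt-type Full
```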
+
+### Upgrade to full
+
+SQL Server VMs that have registered the extension in *lightweight* mode can upgrade to _full_ using the Azure portal, the Azure CLI, or Azure PowerShell. SQL Server VMs in _NoAgent_ mode can upgrade to _full_ after the OS is upgraded to Windows 2008 R2 and above. It is not possible to downgrade; instead, you will need to [unregister](#unregister-from-extension) the SQL Server VM from the SQL IaaS Agent extension. Doing so will remove the **SQL virtual machine** _resource_, but will not delete the actual virtual machine.
+
+#### Azure portal
+
+To upgrade the extension to full mode using the Azure portal, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to your [SQL virtual machines](manage-sql-vm-portal.md#access-the-resource) resource.
+1. Select your SQL Server VM, and navigate to the **Overview** page.
+1. For SQL Server VMs with the NoAgent or lightweight IaaS extension mode, select the **Only license type and edition updates are available with the current SQL IaaS extension mode...** message.
+
+ ![Selections for changing the mode from the portal](./media/sql-agent-extension-manually-register-single-vm/change-sql-iaas-mode-portal.png)
+
+1. Select **Confirm** to upgrade your SQL Server IaaS extension mode to full.
+
+ ![Select **Confirm** to upgrade your SQL Server IaaS extension mode to full.](./media/sql-agent-extension-manually-register-single-vm/enable-full-mode-iaas.png)
+
+#### Command line
+
+# [Azure CLI](#tab/bash)
+
+To upgrade the extension to full mode, run the following Azure CLI code snippet:
+
+ ```azurecli-interactive
+ # Update to full mode
+ az sql vm update --name <vm_name> --resource-group <resource_group_name> --sql-mgmt-type full
+ ```
+
+# [Azure PowerShell](#tab/powershell)
+
+To upgrade the extension to full mode, run the following Azure PowerShell code snippet:
+
+ ```powershell-interactive
+ # Get the existing Compute VM
+ $vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>
-There are three management modes for the [SQL Server IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md#management-modes).
+ # Register with SQL IaaS Agent extension in full mode
+ Update-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -SqlManagementType Full -Location $vm.Location
+ ```
-Registering the extension in full management mode restarts the SQL Server service so it's recommended to register the extension in lightweight mode first, and then [upgrade to full](#upgrade-to-full) during a maintenance window.
+
-### Lightweight management mode
+## Lightweight mode
-Use the Azure CLI or Azure PowerShell to register your SQL Server VM with the extension in lightweight mode. This will not restart the SQL Server service. You can then upgrade to full mode at any time, but doing so will restart the SQL Server service so it is recommended to wait until a scheduled maintenance window.
+Use the Azure CLI or Azure PowerShell to register your SQL Server VM with the extension in lightweight mode for limited functionality.
Provide the SQL Server license type as either pay-as-you-go (`PAYG`) to pay per usage, Azure Hybrid Benefit (`AHUB`) to use your own license, or disaster recovery (`DR`) to activate the [free DR replica license](business-continuity-high-availability-disaster-recovery-hadr-overview.md#free-dr-replica-in-azure). Failover cluster instances and multi-instance deployments can only be registered with the SQL IaaS Agent extension in lightweight mode.
+To learn more about lightweight mode, see [management modes](sql-server-iaas-agent-extension-automate-management.md#management-modes).
+ # [Azure CLI](#tab/bash) Register a SQL Server VM in lightweight mode with the Azure CLI:
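A sketch of that CLI call, mirroring the PowerShell example that follows (flag names are per the `az sql vm` command group and worth verifying on your CLI version):

```azurecli-interactive
# Sketch: register an existing VM in lightweight mode
az sql vm create --name <vm_name> --resource-group <resource_group_name> \
    --license-type <license_type> --sql-mgmt-type LightWeight
```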
Register a SQL Server VM in lightweight mode with Azure PowerShell:
New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Location $vm.Location ` -LicenseType <license_type> -SqlManagementType LightWeight ```-
-### Full management mode
-
-Registering your SQL Server VM in full mode will restart the SQL Server service. Please proceed with caution.
-
-To register your SQL Server VM directly in full mode (and possibly restart your SQL Server service), use the following Azure PowerShell command:
-
- ```powershell-interactive
- # Get the existing Compute VM
- $vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>
-
- # Register with SQL IaaS Agent extension in full mode
- New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -SqlManagementType Full
- ```
-
-### NoAgent management mode
+## NoAgent management mode
-SQL Server 2008 and 2008 R2 installed on Windows Server 2008 (_not R2_) can be registered with the SQL IaaS Agent extension in the [NoAgent mode](sql-server-iaas-agent-extension-automate-management.md#management-modes). This option assures compliance and allows the SQL Server VM to be monitored in the Azure portal with limited functionality.
+SQL Server 2008 and 2008 R2 installed on Windows Server 2008 (_not R2_) can only be registered with the SQL IaaS Agent extension in the [NoAgent mode](sql-server-iaas-agent-extension-automate-management.md#management-modes). This option assures compliance and allows the SQL Server VM to be monitored in the Azure portal with limited functionality.
For the **license type**, specify `AHUB`, `PAYG`, or `DR`. For the **image offer**, specify either `SQL2008-WS2008` or `SQL2008R2-WS2008`.
New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Location $v
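A fuller sketch of the NoAgent registration call is shown below; the `-Offer` and `-Sku` values are assumptions based on the image offers named above:

```powershell-interactive
# Sketch: register a SQL Server 2008 VM on Windows Server 2008 in NoAgent mode.
# -Offer and -Sku values are assumptions; match them to your deployment.
$vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>

New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Location $vm.Location `
    -LicenseType PAYG -SqlManagementType NoAgent `
    -Sku Standard -Offer SQL2008-WS2008
```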
+ ## Check extension mode
-Use Azure PowerShell to check what mode your SQL Server IaaS agent extension is in.
+Use Azure PowerShell to check what management mode your SQL Server IaaS agent extension is in.
To check the mode of the extension, use this Azure PowerShell cmdlet:
$sqlvm.SqlManagementType
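Expanded into a minimal, runnable sketch (assuming the Az.SqlVirtualMachine module is installed):

```powershell-interactive
# Sketch: retrieve the SQL virtual machine resource and inspect its management mode
$sqlvm = Get-AzSqlVM -Name <vm_name> -ResourceGroupName <resource_group_name>
$sqlvm.SqlManagementType
```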
SQL Server VMs that have registered the extension in *lightweight* mode can upgrade to _full_ using the Azure portal, the Azure CLI, or Azure PowerShell. SQL Server VMs in _NoAgent_ mode can upgrade to _full_ after the OS is upgraded to Windows 2008 R2 and above. It is not possible to downgrade; instead, you will need to [unregister](#unregister-from-extension) the SQL Server VM from the SQL IaaS Agent extension. Doing so will remove the **SQL virtual machine** _resource_, but will not delete the actual virtual machine.
-> [!NOTE]
-> When you upgrade the management mode for the SQL IaaS extension to full, it will restart the SQL Server service. In some cases, the restart may cause the service principal names (SPNs) associated with the SQL Server service to change to the wrong user account. If you have connectivity issues after upgrading the management mode to full, [unregister and reregister your SPNs](/sql/database-engine/configure-windows/register-a-service-principal-name-for-kerberos-connections).
- ### Azure portal To upgrade the extension to full mode using the Azure portal, follow these steps:
To upgrade the extension to full mode using the Azure portal, follow these steps
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Go to your [SQL virtual machines](manage-sql-vm-portal.md#access-the-resource) resource. 1. Select your SQL Server VM, and select **Overview**.
-1. For SQL Server VMs with the NoAgent or lightweight IaaS mode, select the **Only license type and edition updates are available with the SQL IaaS extension** message.
+1. For SQL Server VMs with the NoAgent or lightweight IaaS mode, select the **Only license type and edition updates are available with the current SQL IaaS extension...** message.
![Selections for changing the mode from the portal](./media/sql-agent-extension-manually-register-single-vm/change-sql-iaas-mode-portal.png)
-1. Select the **I agree to restart the SQL Server service on the virtual machine** check box, and then select **Confirm** to upgrade your IaaS mode to full.
+1. Select **Confirm** to upgrade your SQL Server extension IaaS mode to full.
- ![Check box for agreeing to restart the SQL Server service on the virtual machine](./media/sql-agent-extension-manually-register-single-vm/enable-full-mode-iaas.png)
+ ![Select **Confirm** to upgrade your SQL Server extension IaaS mode to full](./media/sql-agent-extension-manually-register-single-vm/enable-full-mode-iaas.png)
### Command line
It's possible for your SQL IaaS agent extension to be in a failed state. Use the
![If your provisioning state shows as **Failed**, choose **Repair** to repair the extension. If your state is **Succeeded** you can check the box next to **Force repair** to repair the extension regardless of state.](./media/sql-agent-extension-manually-register-single-vm/force-repair-extension.png)

## Unregister from extension

To unregister your SQL Server VM with the SQL IaaS Agent extension, delete the SQL virtual machine *resource* using the Azure portal or Azure CLI. Deleting the SQL virtual machine *resource* does not delete the SQL Server VM. However, use caution and follow the steps carefully because it is possible to inadvertently delete the virtual machine when attempting to remove the *resource*.
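For the CLI path, a hedged sketch is shown below; the resource type string is an assumption and should be confirmed before running, since `az resource delete` acts on whatever resource you point it at:

```azurecli-interactive
# Sketch: delete only the SQL virtual machine resource, not the VM itself.
# The resource type string is an assumption; confirm it before running.
az resource delete \
    --resource-group <resource_group_name> \
    --name <vm_name> \
    --resource-type "Microsoft.SqlVirtualMachine/sqlVirtualMachines"
```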
azure-sql Sql Agent Extension Manually Register Vms Bulk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk.md
This article describes how to register your SQL Server virtual machines (VMs) in
This article teaches you to register SQL Server VMs manually in bulk. Alternatively, you can register [all SQL Server VMs automatically](sql-agent-extension-automatic-registration-all-vms.md) or [individual SQL Server VMs manually](sql-agent-extension-manually-register-single-vm.md).
+> [!NOTE]
+> Starting in September 2021, registering with the SQL IaaS extension in full mode no longer requires restarting the SQL Server service.
+ ## Overview
-The `Register-SqlVMs` cmdlet can be used to register all virtual machines in a given list of subscriptions, resource groups, or a list of specific virtual machines. The cmdlet will register the virtual machines in [lightweight_ management mode](sql-server-iaas-agent-extension-automate-management.md#management-modes), and then generate both a [report and a log file](#output-description).
+The `Register-SqlVMs` cmdlet can be used to register all virtual machines in a given list of subscriptions, resource groups, or a list of specific virtual machines. The cmdlet will register the virtual machines in [lightweight management mode](sql-server-iaas-agent-extension-automate-management.md#management-modes), and then generate both a [report and a log file](#output-description).
The registration process carries no risk, has no downtime, and will not restart the SQL Server service or the virtual machine.
The registration process carries no risk, has no downtime, and will not restart
To register your SQL Server VM with the extension, you'll need the following:
-- An [Azure subscription](https://azure.microsoft.com/free/) that has been [registered with the **Microsoft.SqlVirtualMachine** provider](sql-agent-extension-manually-register-single-vm.md#register-subscription-with-resource-provider) and contains unregistered SQL Server virtual machines.
+- An [Azure subscription](https://azure.microsoft.com/free/) that has been [registered with the **Microsoft.SqlVirtualMachine** resource provider](sql-agent-extension-manually-register-single-vm.md#register-subscription-with-rp) and contains unregistered SQL Server virtual machines.
- The client credentials used to register the virtual machines exist in any of the following Azure roles: **Virtual Machine contributor**, **Contributor**, or **Owner**. - The latest version of [Az PowerShell (5.0 minimum)](/powershell/azure/new-azureps-module-az).
azure-sql Sql Server Iaas Agent Extension Automate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
ms.devlang: na
vm-windows-sql-server Previously updated : 11/07/2020 Last updated : 9/01/2021
The SQL Server IaaS Agent extension (SqlIaasExtension) runs on SQL Server on Azu
This article provides an overview of the extension. To install the SQL Server IaaS extension to SQL Server on Azure VMs, see the articles for [Automatic installation](sql-agent-extension-automatic-registration-all-vms.md), [Single VMs](sql-agent-extension-manually-register-single-vm.md), or [VMs in bulk](sql-agent-extension-manually-register-vms-bulk.md).
+> [!NOTE]
+> Starting in September 2021, registering with the SQL IaaS extension in full mode no longer requires restarting the SQL Server service.
+ ## Overview The SQL Server IaaS Agent extension allows for integration with the Azure portal, and depending on the management mode, unlocks a number of feature benefits for SQL Server on Azure VMs:
The following table details these benefits:
You can choose to register your SQL IaaS extension in three management modes: -- **Lightweight** mode copies extension binaries to the VM, but does not install the agent, and does not restart the SQL Server service. Lightweight mode only supports changing the license type and edition of SQL Server and provides limited portal management. Use this option for SQL Server VMs with multiple instances, or those participating in a failover cluster instance (FCI). Lightweight mode is the default management mode when using the [automatic registration](sql-agent-extension-automatic-registration-all-vms.md) feature, or when a management type is not specified during manual registration. There is no impact to memory or CPU when using the lightweight mode, and there is no associated cost. It is recommended to register your SQL Server VM in lightweight mode first, and then upgrade to Full mode during a scheduled maintenance window.
+- **Lightweight** mode copies extension binaries to the VM, but does not install the agent. Lightweight mode _only_ supports changing the license type and edition of SQL Server and provides limited portal management. Use this option for SQL Server VMs with multiple instances, or those participating in a failover cluster instance (FCI). Lightweight mode is the default management mode when using the [automatic registration](sql-agent-extension-automatic-registration-all-vms.md) feature, or when a management type is not specified during manual registration. There is no impact to memory or CPU when using the lightweight mode, and there is no associated cost.
-- **Full** mode installs the SQL IaaS Agent to the VM to deliver all functionality, but requires a restart of the SQL Server service and system administrator permissions. Use it for managing a SQL Server VM with a single instance. Full mode installs two windows services that have a minimal impact to memory and CPU - these can be monitored through task manager. There is no cost associated with using the full manageability mode.
+- **Full** mode installs the SQL IaaS Agent to the VM to deliver full functionality. Use it for managing a SQL Server VM with a single instance. Full mode installs two Windows services that have a minimal impact on memory and CPU; these can be monitored through Task Manager. There is no cost associated with using the full manageability mode. System administrator permissions are required. As of September 2021, restarting the SQL Server service is no longer necessary when registering your SQL Server VM in full management mode.
- **NoAgent** mode is dedicated to SQL Server 2008 and SQL Server 2008 R2 installed on Windows Server 2008. There is no impact to memory or CPU when using the NoAgent mode. There is no cost associated with using the NoAgent manageability mode, the SQL Server is not restarted, and an agent is not installed to the VM.
You can view the current mode of your SQL Server IaaS agent by using Azure Power
## Installation
-Register your SQL Server VM with the SQL Server IaaS Agent extension to create the **SQL virtual machine** _resource_ within your subscription, which is a _separate_ resource from the virtual machine resource. Unregistering your SQL Server VM from the extension will remove the **SQL virtual machine** _resource_ but will not drop the actual virtual machine.
+Register your SQL Server VM with the SQL Server IaaS Agent extension to create the [**SQL virtual machine** _resource_](manage-sql-vm-portal.md) within your subscription, which is a _separate_ resource from the virtual machine resource. Unregistering your SQL Server VM from the extension will remove the **SQL virtual machine** _resource_ but will not drop the actual virtual machine.
-Deploying a SQL Server VM Azure Marketplace image through the Azure portal automatically registers the SQL Server VM with the extension. However, if you choose to self-install SQL Server on an Azure virtual machine, or provision an Azure virtual machine from a custom VHD, then you must register your SQL Server VM with the SQL IaaS extension to unlock feature benefits.
+Deploying a SQL Server VM Azure Marketplace image through the Azure portal automatically registers the SQL Server VM with the extension in full mode. However, if you choose to self-install SQL Server on an Azure virtual machine, or provision an Azure virtual machine from a custom VHD, then you must register your SQL Server VM with the SQL IaaS extension to unlock feature benefits.
-Registering the extension in lightweight mode will copy the binaries but not install the agent to the VM. The agent is installed to the VM when the extension is upgraded to full management mode.
+Registering the extension in lightweight mode copies binaries but does not install the agent to the VM. The agent is installed to the VM when the extension is installed in full management mode.
There are three ways to register with the extension: - [Automatically for all current and future VMs in a subscription](sql-agent-extension-automatic-registration-all-vms.md)
Alternatively, to use a named instance with an Azure Marketplace SQL Server imag
1. [Unregister](sql-agent-extension-manually-register-single-vm.md#unregister-from-extension) the SQL Server VM from the SQL IaaS Agent extension. 1. Uninstall SQL Server completely within the SQL Server VM. 1. Install SQL Server with a named instance within the SQL Server VM.
- 1. [Register the VM with the SQL IaaS Agent Extension](sql-agent-extension-manually-register-single-vm.md#register-with-extension).
+ 1. [Register the VM with the SQL IaaS Agent Extension](sql-agent-extension-manually-register-single-vm.md#full-mode).
## Verify status of extension
The SQL IaaS Agent extension only supports:
## In-region data residency+ Azure SQL virtual machine and the SQL IaaS Agent Extension do not move or store customer data out of the region in which they are deployed. ## Next steps
azure-sql Sql Vm Create Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-vm-create-powershell-quickstart.md
If you don't have an Azure subscription, create a [free account](https://azure.m
To get portal integration and SQL VM features, you must register with the [SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md).
-To get full functionality, you will need to register with the extension in full mode. However, doing so restarts the SQL Server service, so the recommended approach is to register in lightweight mode and then upgrade to full during a maintenance window.
-
-First, register your SQL Server VM in lightweight mode:
-
-```powershell-interactive
-# Get the existing compute VM
-$vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>
-
-# Register SQL VM with 'Lightweight' SQL IaaS agent
-New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Location $vm.Location `
- -LicenseType PAYG -SqlManagementType LightWeight
-```
-
-Then during a maintenance window, upgrade to full mode:
-
-```powershell-interactive
-# Get the existing Compute VM
-$vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>
-
-# Register with SQL IaaS Agent extension in full mode
-Update-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -SqlManagementType Full
-```
-
+To get full functionality, you need to register with the extension in [full mode](sql-agent-extension-manually-register-single-vm.md#full-mode). Otherwise, register in lightweight mode.
## Remote desktop into the VM
backup Backup Azure Vms Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-automation.md
Use the [Wait-AzRecoveryServicesBackupJob](/powershell/module/az.recoveryservice
Wait-AzRecoveryServicesBackupJob -Job $restorejob -Timeout 43200 ```
-Once the Restore job has completed, use the [Get-AzRecoveryServicesBackupJobDetails](/powershell/module/az.recoveryservices/wait-azrecoveryservicesbackupjob) cmdlet to get the details of the restore operation. The JobDetails property has the information needed to rebuild the VM.
+Once the Restore job has completed, use the [Get-AzRecoveryServicesBackupJobDetail](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupjobdetail) cmdlet to get the details of the restore operation. The JobDetails property has the information needed to rebuild the VM.
```powershell $restorejob = Get-AzRecoveryServicesBackupJob -Job $restorejob -VaultId $targetVault.ID
-$details = Get-AzRecoveryServicesBackupJobDetails -Job $restorejob -VaultId $targetVault.ID
+$details = Get-AzRecoveryServicesBackupJobDetail -Job $restorejob -VaultId $targetVault.ID
``` #### Restore selective disks
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-rbac-rs-vault.md
The following table captures the Backup management actions and corresponding rol
| Management Operation | Role Required | Resources |
| | | |
| Enable backup of Azure File shares | Backup Contributor |Recovery Services vault |
-| |Storage Account | Contributor Storage account resource |
+| | Storage Account Backup Contributor | Storage account resource |
| On-demand backup of VM | Backup Operator | Recovery Services vault |
| Restore File share | Backup Operator | Recovery Services vault |
-| | Storage Account Contributor | Storage account resources where restore source and Target file shares are present |
+| | Storage Account Backup Contributor | Storage account resources where restore source and Target file shares are present |
| Restore Individual Files | Backup Operator | Recovery Services vault |
| |Storage Account Contributor|Storage account resources where restore source and Target file shares are present |
| Stop protection |Backup Contributor | Recovery Services vault |
backup Restore Azure Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-azure-encrypted-virtual-machines.md
+
+ Title: Restore encrypted Azure VMs
+description: Describes how to restore encrypted Azure VMs with the Azure Backup service.
+ Last updated : 08/20/2021+
+# Restore encrypted Azure virtual machines
+
+This article describes how to restore Windows or Linux Azure virtual machines (VMs) with encrypted disks using the [Azure Backup](backup-overview.md) service. For more information, see [Encryption of Azure VM backups](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
++
+## Before you start
+
+Review the known limitations before you start the restore of an encrypted VM:
+
+- You can back up and restore ADE encrypted VMs within the same subscription.
+- Azure Backup supports VMs encrypted using standalone keys. Any key that's a part of a certificate used to encrypt a VM isn't currently supported.
+- ADE encrypted VMs can't be recovered at the file/folder level. You need to recover the entire VM to restore files and folders.
+- When restoring a VM, you can't use the [replace existing VM](backup-azure-arm-restore-vms.md#restore-options) option for ADE encrypted VMs. This option is only supported for unencrypted managed disks.
++
+## Restore an encrypted VM
+
+Encrypted VMs can only be restored by restoring the VM disk and creating a virtual machine instance, as explained below. **Replacing the existing disk on the existing VM**, **creating a VM from restore points**, and **file or folder level restore** are currently not supported.
+
+Follow the steps below to restore encrypted VMs:
+
+### **Step 1**: Restore the VM disk
+
+1. In **Restore configuration** > **Create new** > **Restore Type** select **Restore disks**.
+1. In **Resource group**, select an existing resource group for the restored disks, or create a new one with a globally unique name.
+1. In **Staging location**, specify the storage account to which the VHDs should be copied. [Learn more](backup-azure-arm-restore-vms.md#storage-accounts).
+
+ ![Select Resource group and Staging location](./media/backup-azure-arm-restore-vms/trigger-restore-operation1.png)
+
+1. Select **Restore** to trigger the restore operation.
+
+When your virtual machine uses managed disks and you select the **Create virtual machine** option, Azure Backup doesn't use the specified storage account. In the case of **Restore disks** and **Instant Restore**, the storage account is used only for storing the template. Managed disks are created in the specified resource group.
+When your virtual machine uses unmanaged disks, they're restored as blobs to the storage account.
+
+ > [!NOTE]
+ > After you restore the VM disk, you can manually swap the OS disk of the original VM with the restored VM disk without re-creating it. [Learn more](https://azure.microsoft.com/blog/os-disk-swap-managed-disks/).
+
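A PowerShell sketch of the manual OS disk swap mentioned in the note above (all names are placeholders, and the VM is deallocated first so the swap can proceed):

```powershell-interactive
# Sketch: swap the OS disk of an existing VM with a restored managed disk.
# All names are placeholders; the VM must be deallocated before swapping.
Stop-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name> -Force

$vm   = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>
$disk = Get-AzDisk -ResourceGroupName <restored_disks_rg> -DiskName <restored_os_disk_name>

Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -Name $disk.Name
Update-AzVM -ResourceGroupName <resource_group_name> -VM $vm

Start-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>
```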
+### **Step 2**: Recreate the virtual machine instance
+
+Do one of the following actions:
+
+- Use the template that's generated during the restore operation to customize VM settings and trigger VM deployment. [Learn more](backup-azure-arm-restore-vms.md#use-templates-to-customize-a-restored-vm).
+ >[!NOTE]
+ >While deploying the template, verify the storage account containers and the public/private settings.
+- Create a new VM from the restored disks using PowerShell. [Learn more](backup-azure-vms-automation.md#create-a-vm-from-restored-disks).
+
+### **Step 3**: Restore an encrypted Linux VM
+
+Reinstall the ADE extension so the data disks are open and mounted.
+
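One possible way to re-apply ADE from the command line is sketched below; the key vault name and volume type are assumptions and must match how the VM was originally encrypted:

```azurecli-interactive
# Sketch: re-enable Azure Disk Encryption on the restored Linux VM.
# Key vault and volume type are assumptions; match your original configuration.
az vm encryption enable \
    --resource-group <resource_group_name> \
    --name <vm_name> \
    --disk-encryption-keyvault <keyvault_name> \
    --volume-type DATA
```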
+## Cross Region Restore for an encrypted Azure VM
+
+Azure Backup supports Cross Region Restore of encrypted Azure VMs to the [Azure paired regions](../best-practices-availability-paired-regions.md). Learn how to [enable Cross Region Restore](backup-create-rs-vault.md#configure-cross-region-restore) for an encrypted VM.
+
+## Move an encrypted Azure VM
+
+Moving an encrypted VM across vaults or resource groups is the same as moving a backed-up Azure virtual machine. See:
+
+- [Steps to move an Azure virtual machine to a different recovery service vault](backup-azure-move-recovery-services-vault.md#move-an-azure-virtual-machine-to-a-different-recovery-service-vault)
+- [Steps to move an Azure virtual machine to different resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
++
+## Next steps
+
+If you run into any issues, review these articles:
+
+- [Common errors](backup-azure-vms-troubleshoot.md) when backing up and restoring encrypted Azure VMs.
+- [Azure VM agent/backup extension](backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md) issues.
+++
bastion Configure Host Scaling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/configure-host-scaling.md
Previously updated : 07/13/2021 Last updated : 08/30/2021 # Customer intent: As someone with a networking background, I want to configure host scaling.
This article helps you add additional scale units (instances) to Azure Bastion i
## Configuration steps -
+1. Sign in to the [Azure portal](https://ms.portal.azure.com).
1. In the Azure portal, navigate to your Bastion host. 1. Host scaling instance count requires Standard tier. On the **Configuration** page, for **Tier**, verify the tier is **Standard**. If the tier is Basic, select **Standard** from the dropdown.
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/quickstart-host-portal.md
Previously updated : 07/13/2021 Last updated : 08/30/2021 # Customer intent: As someone with a networking background, I want to connect to a virtual machine securely via RDP/SSH using a private IP address through my browser.
You can use the following example values when creating this configuration, or yo
There are a few different ways to configure a bastion host. In the following steps, you'll create a bastion host in the Azure portal directly from your VM. When you create a host from a VM, various settings will automatically populate corresponding to your virtual machine and/or virtual network. -
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com).
1. Navigate to the VM that you want to connect to, then select **Connect**. :::image type="content" source="./media/quickstart-host-portal/vm-connect.png" alt-text="Screenshot of virtual machine settings." lightbox="./media/quickstart-host-portal/vm-connect.png":::
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/tutorial-create-host-portal.md
Previously updated : 07/13/2021 Last updated : 08/30/2021
You can use the following example values when creating this configuration, or yo
| Public IP address SKU | Standard | | Assignment | Static |
-## Sign in to the Azure portal
--
-Sign in to the Azure portal.
- ## <a name="createhost"></a>Create a bastion host This section helps you create the bastion object in your VNet. This is required in order to create a secure connection to a VM in the VNet.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com).
1. Type **Bastion** into the search. 1. Under services, click **Bastions**. 1. On the Bastions page, click **+ Create** to open the **Create a Bastion** page.
bastion Upgrade Sku https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/upgrade-sku.md
Previously updated : 07/13/2021 Last updated : 08/30/2021 # Customer intent: As someone with a networking background, I want to upgrade to the Standard SKU.
This article helps you upgrade from the Basic Tier (SKU) to Standard. Once you u
## Configuration steps -
+1. Sign in to the [Azure portal](https://ms.portal.azure.com).
1. In the Azure portal, navigate to your Bastion host. 1. On the **Configuration** page, for **Tier**, select **Standard** from the dropdown.
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-vm-sizes.md
Title: Choose VM sizes and images for pools description: How to choose from the available VM sizes and OS versions for compute nodes in Azure Batch pools Previously updated : 08/10/2021 Last updated : 08/27/2021 # Choose a VM size and image for compute nodes in an Azure Batch pool
-When you select a node size for an Azure Batch pool, you can choose from among almost all the VM sizes available in Azure. Azure offers a range of sizes for Linux and Windows VMs for different workloads.
+When you select a node size for an Azure Batch pool, you can choose from almost all the VM sizes available in Azure. Azure offers a range of sizes for Linux and Windows VMs for different workloads.
## Supported VM series and sizes
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/troubleshooting.md
Cloud Shell is intended for interactive use cases. As a result, any long-running
Permissions are set as regular users without sudo access. Any installation outside your `$Home` directory is not persisted.
+### Supported entry point limitations
+
+Cloud Shell entry points besides the Azure portal, such as Visual Studio Code and Windows Terminal, do not support the use of commands that modify UX components in Cloud Shell, such as `Code`.
+ ## Bash limitations ### Editing .bashrc
Azure Cloud Shell takes your personal data seriously, the data captured and stor
### Export

In order to **export** the user settings Cloud Shell saves for you, such as preferred shell, font size, and font type, run the following commands.
-1. [![Image showing a button labeled Launch Azure Cloud Shell.](https://shell.azure.com/images/launchcloudshell.png)](https://shell.azure.com)
+1. Launch Cloud Shell.
2. Run the following commands in Bash or PowerShell:
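As a hedged sketch of what such a command can look like (the endpoint path and API version shown here are assumptions to verify):

```azurecli-interactive
# Sketch: fetch Cloud Shell user settings via ARM.
# Endpoint path and api-version are assumptions; verify before relying on them.
az rest --method get \
    --url "https://management.azure.com/providers/Microsoft.Portal/userSettings/cloudconsole?api-version=2017-12-01-preview"
```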
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/network-isolation.md
The Cognitive Search instance can be isolated via a private endpoint after the Q
> [ ![Screenshot of networking UI with public/private toggle button]( ../media/network-isolation/private.png) ]( ../media/network-isolation/private.png#lightbox) 4. Once the Search resource is switched to private, select add **private endpoint**.
- - **Basic tab**: make sure you are creating your endpoint in the same region as search resource.
+ - **Basics tab**: make sure you are creating your endpoint in the same region as the search resource.
- **Resource tab**: select the required search resource of type `Microsoft.Search/searchServices`. > [!div class="mx-imgBorder"]
The Cognitive Search instance can be isolated via a private endpoint after the Q
> [!div class="mx-imgBorder"] > [ ![Screenshot of create private endpoint UI window with subnet field populated]( ../media/network-isolation/subnet.png) ]( ../media/network-isolation/subnet.png#lightbox)
- 5. Enable VNET integration for the regular App Service. You can skip this step for ASE, as that already has access to the VNET.
- - Go to App Service **Networking** section, and open **VNet Integration**.
- - Link to the dedicated App Service VNet, Subnet (appservicevnet) created in step 2.
+5. Enable VNET integration for the regular App Service. You can skip this step for ASE, as that already has access to the VNET.
+ - Go to App Service **Networking** section, and open **VNet Integration**.
+ - Link to the dedicated App Service VNet, Subnet (appservicevnet) created in step 2.
> [!div class="mx-imgBorder"] > [ ![Screenshot of VNET integration UI]( ../media/network-isolation/integration.png) ]( ../media/network-isolation/integration.png#lightbox)
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto.md
Previously updated : 08/10/2021 Last updated : 08/27/2021 keywords: on-premises, Docker, container
Speech containers enable customers to build a speech application architecture th
| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 2.13.0 | Generally Available |
| Custom Speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 2.13.0 | Generally Available |
| Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.14.1 | Generally Available |
-| Speech Language Identification | Detect the language spoken in audio files. | 1.3.0 | Gated preview |
+| Speech Language Identification | Detect the language spoken in audio files. | 1.3.0 | Preview |
| Neural Text-to-speech | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | 1.8.0 | Generally Available |
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
- ## Prerequisites > [!IMPORTANT]
-> To use the speech containers you must submit an online request, and have it approved. See the **Request approval to the run the container** section below for more information.
+> * To use the speech containers you must submit an online request, and have it approved. See the **Request approval to run the container** section below for more information.
+> * *Generally Available* containers meet Microsoft's stability and support requirements. Containers in *Preview* are still under development.
You must meet the following prerequisites before using Speech service containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
Previously updated : 08/09/2021 Last updated : 08/27/2021
The Text Analytics API is updated on an ongoing basis. To stay up-to-date with r
* Version `3.2-preview.1` which includes a public preview for [extractive summarization](how-tos/extractive-summarization.md). * [Asynchronous operation](how-tos/text-analytics-how-to-call-api.md?tabs=asynchronous) is now available in the Azure Government and Azure China regions.
-* New preview versions of the client library, with support for extractive summarization. See the following samples:
- * [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_ExtractSummary.md)
- * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/AnalyzeExtractiveSummarization.java)
- * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_summary.py)
+* New preview versions of the client library, with support for extractive summarization. [See the quickstart](quickstarts/client-libraries-rest-api.md) for more information.
## July 2021
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-periodic-backup-restore.md
description: This article describes how to configure Azure Cosmos DB accounts wi
Previously updated : 07/21/2021 Last updated : 08/30/2021
Backup data in Azure Cosmos DB is replicated three times in the primary region.
* **Locally-redundant backup storage:** This option copies your data asynchronously three times within a single physical location in the primary region. > [!NOTE]
-> Zone-redundant storage is currently available only in [specific regions](high-availability.md#availability-zone-support). Based on the region you select; this option will not be available for new or existing accounts.
+> Zone-redundant storage is currently available only in [specific regions](high-availability.md#availability-zone-support). Depending on the region you select for a new account, or the region of an existing account, the zone-redundant option might not be available.
> > Updating backup storage redundancy will not have any impact on backup storage pricing.
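As a sketch of how the redundancy option can be chosen at account creation time with the Azure CLI (placeholder names; the `--backup-redundancy` parameter is an assumption that requires a recent CLI version):

```azurecli
# Create an account whose periodic backups use zone-redundant storage.
# Assumed valid values: Geo, Local, Zone; Zone fails in unsupported regions.
az cosmosdb create --name mycosmosaccount \
    --resource-group myResourceGroup \
    --backup-redundancy Zone
```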
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/dedicated-gateway.md
Previously updated : 08/26/2021 Last updated : 08/30/2021
The dedicated gateway has the following limitations during the public preview:
- You can't provision a dedicated gateway in Azure Cosmos DB accounts with [availability zones](high-availability.md#availability-zone-support) enabled. - You can't use [role-based access control (RBAC)](how-to-setup-rbac.md) to authenticate data plane requests routed through the dedicated gateway
+## Supported regions
+
+The dedicated gateway is in public preview and isn't supported in every Azure region yet. Throughout the public preview, we'll be adding new capacity. We won't have region restrictions when the dedicated gateway is generally available.
+
+Current list of supported Azure regions:
+
+| **Americas** | **Europe and Africa** | **Asia Pacific** |
+| -- | -- | -- |
+| Brazil South | France Central | Australia Central |
+| Canada Central | France South | Australia Central 2 |
+| Canada East | Germany North | Australia Southeast |
+| Central US | Germany West Central | Central India |
+| East US | North Europe | East Asia |
+| East US 2 | Switzerland North | Japan West |
+| North Central US | UK South | Korea Central |
+| South Central US | UK West | Korea South |
+| West Central US | West Europe | Southeast Asia |
+| West US | | UAE Central |
+| West US 2 | | West India |
++ ## Next steps Read more about dedicated gateway usage in the following articles:
cosmos-db Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/plan-manage-costs.md
As an aid for estimating costs, it can be helpful to do capacity planning for a
### Estimate provisioned throughput costs
-If you plan to use Azure Cosmos DB in provisioned throughput mode, use the [Azure Cosmos DB capacity calculator](https://cosmos.azure.com/capacitycalculator/) to estimate costs before you create the resources in an Azure Cosmos account. The capacity calculator is used to get an estimate of the required throughput and cost of your workload. Configuring your Azure Cosmos databases and containers with the right amount of provisioned throughput, or [Request Units (RU/s)](request-units.md), for your workload is essential to optimize the cost and performance. You have to input details such as API type, number of regions, item size, read/write requests per second, total data stored to get a cost estimate. To learn more about the capacity calculator, see the [estimate](estimate-ru-with-capacity-planner.md) article.
+If you plan to use Azure Cosmos DB in provisioned throughput mode, use the [Azure Cosmos DB capacity calculator](https://cosmos.azure.com/capacitycalculator/) to estimate costs before you create the resources in an Azure Cosmos account. The capacity calculator is used to get an estimate of the required throughput and cost of your workload. The capacity calculator is currently available for SQL API, Cassandra API, and API for MongoDB only.
+
+Configuring your Azure Cosmos databases and containers with the right amount of provisioned throughput, or [Request Units (RU/s)](request-units.md), for your workload is essential to optimize cost and performance. You have to input details such as API type, number of regions, item size, read/write requests per second, and total data stored to get a cost estimate. To learn more about the capacity calculator, see the [estimate](estimate-ru-with-capacity-planner.md) article.
The following screenshot shows the throughput and cost estimation by using the capacity calculator:
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/best-practice-dotnet.md
Watch the video below to learn more about using the .NET SDK from a Cosmos DB en
|<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Cosmos DB [visit](troubleshoot-dot-net-sdk-request-timeout.md) | |<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK will not retry on writes for transient failures as writes are not idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](troubleshoot-dot-net-sdk.md#retry-logics) | |<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
-|&#10003; | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
+|<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you are not aware of the number of partitions, start by using `int.MaxValue` which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. | | <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. | | <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit](performance-tips-dotnet-sdk-v3-sql.md#indexing-policy) |
cosmos-db Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/javascript-query-api.md
In addition to issuing queries using the SQL API in Azure Cosmos DB, the [Cosmos
|`pluck([propertyName] [, options] [, callback])`|This function is a shortcut for a map that extracts the value of a single property from each input item.| |`sortBy([predicate] [, options] [, callback])`|Produces a new set of documents by sorting the documents in the input document stream in ascending order by using the given predicate. This function behaves similar to an ORDER BY clause in SQL.| |`sortByDescending([predicate] [, options] [, callback])`|Produces a new set of documents by sorting the documents in the input document stream in descending order using the given predicate. This function behaves similar to an ORDER BY x DESC clause in SQL.|
-|`unwind(collectionSelector, [resultSelector], [options], [callback])`|Performs a self-join with inner array and adds results from both sides as tuples to the result projection. For instance, joining a person document with person.pets would produce [person, pet] tuples. This is similar to SelectMany in .NET LINK.|
+|`unwind(collectionSelector, [resultSelector], [options], [callback])`|Performs a self-join with inner array and adds results from both sides as tuples to the result projection. For instance, joining a person document with person.pets would produce [person, pet] tuples. This is similar to SelectMany in .NET LINQ.|
When included inside predicate and/or selector functions, the following JavaScript constructs get automatically optimized to run directly on Azure Cosmos DB indices:
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/manage-with-powershell.md
There are four types of keys for an Azure Cosmos account (Primary, Secondary, Pr
```azurepowershell-interactive $resourceGroupName = "myResourceGroup" # Resource Group must already exist $accountName = "mycosmosaccount" # Must be all lower case
-$keyKind = "primary" # Other key kinds: secondary, primaryReadOnly, secondaryReadOnly
+$keyKind = "primary" # Other key kinds: secondary, primaryReadonly, secondaryReadonly
New-AzCosmosDBAccountKey ` -ResourceGroupName $resourceGroupName `
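The Azure CLI covers the same operation; a sketch with the same placeholder names:

```azurecli
# Regenerate the primary key for an existing Azure Cosmos account.
# Other --key-kind values: secondary, primaryReadonly, secondaryReadonly.
az cosmosdb keys regenerate --name mycosmosaccount \
    --resource-group myResourceGroup \
    --key-kind primary
```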
data-factory Connector Amazon Simple Storage Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-simple-storage-service.md
Previously updated : 03/17/2021 Last updated : 08/30/2021 # Copy data from Amazon Simple Storage Service by using Azure Data Factory
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy and transform data in Azure Blob storage by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy and transform data in Azure Cosmos DB (SQL API) by using Azure Data Factory
data-factory Connector Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-explorer.md
Previously updated : 07/19/2020 Last updated : 08/30/2021 # Copy data to or from Azure Data Explorer by using Azure Data Factory
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-store.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy data to or from Azure Data Lake Storage Gen1 using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-databricks-delta-lake.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy data to and from Azure Databricks Delta Lake using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-managed-instance.md
Previously updated : 06/15/2021 Last updated : 08/30/2021 # Copy and transform data in Azure SQL Managed Instance by using Azure Data Factory
data-factory Connector Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-table-storage.md
Previously updated : 03/17/2021 Last updated : 08/30/2021 # Copy data to and from Azure Table storage by using Azure Data Factory
data-factory Connector Db2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-db2.md
Previously updated : 05/26/2020 Last updated : 08/30/2021 # Copy data from DB2 by using Azure Data Factory
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy data from and to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-file-system.md
Previously updated : 08/24/2021 Last updated : 08/30/2021
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-bigquery.md
Previously updated : 09/04/2019 Last updated : 08/30/2021 # Copy data from Google BigQuery by using Azure Data Factory
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-http.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy data from an HTTP endpoint by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odata.md
Previously updated : 03/30/2021 Last updated : 08/30/2021 # Copy data from an OData source by using Azure Data Factory
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle.md
Previously updated : 08/24/2021 Last updated : 08/30/2021
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-overview.md
Previously updated : 08/24/2021 Last updated : 08/30/2021
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
Title: Copy data from and to a REST endpoint by using Azure Data Factory
+ Title: Copy and transform data from and to a REST endpoint by using Azure Data Factory
-description: Learn how to copy data from a cloud or on-premises REST source to supported sink data stores, or from supported source data store to a REST sink by using a copy activity in an Azure Data Factory pipeline.
+description: Learn how to use Copy Activity to copy data and use Data Flow to transform data from a cloud or on-premises REST source to supported sink data stores, or from supported source data store to a REST sink in Azure Data Factory or Azure Synapse Analytics pipelines.
Previously updated : 08/24/2021 Last updated : 08/30/2021
-# Copy data from and to a REST endpoint by using Azure Data Factory
+# Copy and transform data from and to a REST endpoint by using Azure Data Factory
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] This article outlines how to use Copy Activity in Azure Data Factory to copy data from and to a REST endpoint. The article builds on [Copy Activity in Azure Data Factory](copy-activity-overview.md), which presents a general overview of Copy Activity.
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy data from and to Salesforce using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-hana.md
Previously updated : 04/22/2020 Last updated : 08/30/2021 # Copy data from SAP HANA using Azure Data Factory
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-table.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy data from an SAP table using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sftp.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy data from and to the SFTP server using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sharepoint-online-list.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy data from SharePoint Online List by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-snowflake.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy and transform data in Snowflake using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sql-server.md
Previously updated : 08/24/2021 Last updated : 08/30/2021 # Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
data-factory Quickstart Create Data Factory Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-azure-cli.md
Previously updated : 03/24/2021 Last updated : 08/27/2021
This quickstart uses an Azure Storage account, which includes a container with a
## Create a data factory
-To create an Azure data factory, run the [az datafactory factory create](/cli/azure/datafactory#az_datafactory_create) command:
+To create an Azure data factory, run the [az datafactory create](/cli/azure/datafactory#az_datafactory_create) command:
```azurecli
-az datafactory factory create --resource-group ADFQuickStartRG \
+az datafactory create --resource-group ADFQuickStartRG \
--factory-name ADFTutorialFactory ``` > [!IMPORTANT] > Replace `ADFTutorialFactory` with a globally unique data factory name, for example, ADFTutorialFactorySP1127.
-You can see the data factory that you created by using the [az datafactory factory show](/cli/azure/datafactory#az_datafactory_factory_show) command:
+You can see the data factory that you created by using the [az datafactory show](/cli/azure/datafactory#az_datafactory_factory_show) command:
```azurecli
-az datafactory factory show --resource-group ADFQuickStartRG \
+az datafactory show --resource-group ADFQuickStartRG \
--factory-name ADFTutorialFactory ```
Next, create a linked service and two datasets.
1. In your working directory, create a JSON file with this content, which includes your own connection string from the previous step. Name the file `AzureStorageLinkedService.json`:
- ```json
- {
- "type":"AzureStorage",
- "typeProperties":{
- "connectionString":{
- "type": "SecureString",
- "value":"DefaultEndpointsProtocol=https;AccountName=adfquickstartstorage;AccountKey=K9F4Xk/EhYrMBIR98rtgJ0HRSIDU4eWQILLh2iXo05Xnr145+syIKNczQfORkQ3QIOZAd/eSDsvED19dAwW/tw==;EndpointSuffix=core.windows.net"
- }
- }
- }
- ```
+ ```json
+ {
+ "type": "AzureBlobStorage",
+ "typeProperties": {
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>;EndpointSuffix=core.windows.net"
+ }
+ }
+ ```
1. Create a linked service, named `AzureStorageLinkedService`, by using the [az datafactory linked-service create](/cli/azure/datafactory/linked-service#az_datafactory_linked_service_create) command:
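A sketch of that command, assuming the factory and resource group created earlier and the JSON file in the working directory:

```azurecli
az datafactory linked-service create --resource-group ADFQuickStartRG \
    --factory-name ADFTutorialFactory --linked-service-name AzureStorageLinkedService \
    --properties @AzureStorageLinkedService.json
```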
Next, create a linked service and two datasets.
1. In your working directory, create a JSON file with this content, named `InputDataset.json`:
- ```json
- {
- "type":
- "AzureBlob",
- "linkedServiceName": {
- "type":"LinkedServiceReference",
- "referenceName":"AzureStorageLinkedService"
- },
- "annotations": [],
- "type": "Binary",
- "typeProperties": {
- "location": {
- "type": "AzureBlobStorageLocation",
- "fileName": "emp.txt",
- "folderPath": "input",
- "container": "adftutorial"
- }
- }
- }
- ```
+ ```json
+ {
+ "linkedServiceName": {
+ "referenceName": "AzureStorageLinkedService",
+ "type": "LinkedServiceReference"
+ },
+ "annotations": [],
+ "type": "Binary",
+ "typeProperties": {
+ "location": {
+ "type": "AzureBlobStorageLocation",
+ "fileName": "emp.txt",
+ "folderPath": "input",
+ "container": "adftutorial"
+ }
+ }
+ }
+ ```
1. Create an input dataset named `InputDataset` by using the [az datafactory dataset create](/cli/azure/datafactory/dataset#az_datafactory_dataset_create) command:
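As with the linked service, a sketch of the dataset command under the same assumptions:

```azurecli
az datafactory dataset create --resource-group ADFQuickStartRG \
    --dataset-name InputDataset --factory-name ADFTutorialFactory \
    --properties @InputDataset.json
```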
Next, create a linked service and two datasets.
1. In your working directory, create a JSON file with this content, named `OutputDataset.json`:
- ```json
- {
- "type":
- "AzureBlob",
- "linkedServiceName": {
- "type":"LinkedServiceReference",
- "referenceName":"AzureStorageLinkedService"
- },
- "annotations": [],
- "type": "Binary",
- "typeProperties": {
- "location": {
- "type": "AzureBlobStorageLocation",
- "fileName": "emp.txt",
- "folderPath": "output",
- "container": "adftutorial"
- }
- }
- }
- ```
+ ```json
+ {
+ "linkedServiceName": {
+ "referenceName": "AzureStorageLinkedService",
+ "type": "LinkedServiceReference"
+ },
+ "annotations": [],
+ "type": "Binary",
+ "typeProperties": {
+ "location": {
+ "type": "AzureBlobStorageLocation",
+ "folderPath": "output",
+ "container": "adftutorial"
+ }
+ }
+ }
+ ```
1. Create an output dataset named `OutputDataset` by using the [az datafactory dataset create](/cli/azure/datafactory/dataset#az_datafactory_dataset_create) command:
Finally, create and run the pipeline.
1. In your working directory, create a JSON file with this content named `Adfv2QuickStartPipeline.json`:
- ```json
- {
- "name": "Adfv2QuickStartPipeline",
- "properties": {
- "activities": [
- {
- "name": "CopyFromBlobToBlob",
- "type": "Copy",
- "dependsOn": [],
- "policy": {
- "timeout": "7.00:00:00",
- "retry": 0,
- "retryIntervalInSeconds": 30,
- "secureOutput": false,
- "secureInput": false
- },
- "userProperties": [],
- "typeProperties": {
- "source": {
- "type": "BinarySource",
- "storeSettings": {
- "type": "AzureBlobStorageReadSettings",
- "recursive": true
- }
- },
- "sink": {
- "type": "BinarySink",
- "storeSettings": {
- "type": "AzureBlobStorageWriteSettings"
- }
- },
- "enableStaging": false
- },
- "inputs": [
- {
- "referenceName": "InputDataset",
- "type": "DatasetReference"
- }
- ],
- "outputs": [
- {
- "referenceName": "OutputDataset",
- "type": "DatasetReference"
- }
- ]
- }
- ],
- "annotations": []
- }
- }
- ```
+ ```json
+ {
+ "name": "Adfv2QuickStartPipeline",
+ "properties": {
+ "activities": [
+ {
+ "name": "CopyFromBlobToBlob",
+ "type": "Copy",
+ "dependsOn": [],
+ "policy": {
+ "timeout": "7.00:00:00",
+ "retry": 0,
+ "retryIntervalInSeconds": 30,
+ "secureOutput": false,
+ "secureInput": false
+ },
+ "userProperties": [],
+ "typeProperties": {
+ "source": {
+ "type": "BinarySource",
+ "storeSettings": {
+ "type": "AzureBlobStorageReadSettings",
+ "recursive": true
+ }
+ },
+ "sink": {
+ "type": "BinarySink",
+ "storeSettings": {
+ "type": "AzureBlobStorageWriteSettings"
+ }
+ },
+ "enableStaging": false
+ },
+ "inputs": [
+ {
+ "referenceName": "InputDataset",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "OutputDataset",
+ "type": "DatasetReference"
+ }
+ ]
+ }
+ ],
+ "annotations": []
+ }
+ }
+ ```
1. Create a pipeline named `Adfv2QuickStartPipeline` by using the [az datafactory pipeline create](/cli/azure/datafactory/pipeline#az_datafactory_pipeline_create) command:
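A sketch of creating and then running the pipeline, under the same assumptions as the earlier commands:

```azurecli
# Create the pipeline from the JSON definition, then start a run.
az datafactory pipeline create --resource-group ADFQuickStartRG \
    --factory-name ADFTutorialFactory --name Adfv2QuickStartPipeline \
    --pipeline @Adfv2QuickStartPipeline.json
az datafactory pipeline create-run --resource-group ADFQuickStartRG \
    --factory-name ADFTutorialFactory --pipeline-name Adfv2QuickStartPipeline
```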
databox Data Box Deploy Set Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-set-up.md
Previously updated : 07/10/2020 Last updated : 08/23/2021 ms.localizationpriority: high
After you have received the device, you need to cable and connect to your device
Perform the following steps to set up your device using the local web UI and the portal UI. 1. Configure the Ethernet adapter on the laptop you are using to connect to the device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
-2. Connect to MGMT port of your device and access its local web UI at https\://192.168.100.10. This may take up to 5 minutes after you turned on the device.
-3. Click **Details** and then click **Go on to the webpage**.
+1. Connect to the MGMT port of your device and access its local web UI at https\://192.168.100.10. This may take up to 5 minutes after you turn on the device.
+1. Click **Details** and then click **Go on to the webpage**.
![Connect to local web UI](media/data-box-deploy-set-up/data-box-connect-local-web-ui.png)
-4. You see a **Sign in** page for the local web UI. Ensure that the device serial number matches across both the portal UI and the local web UI. The device is locked at this point.
-5. Sign into the [Azure portal](https://portal.azure.com).
-6. Download the device credentials from portal. Go to **General > Device details**. Copy the **Device password**. The device password is tied to a specific order in the portal.
+1. You see a **Sign in** page for the local web UI. Ensure that the device serial number matches across both the portal UI and the local web UI. The device is locked at this point.
- ![Get device credentials](media/data-box-deploy-set-up/data-box-device-credentials.png)
+1. [!INCLUDE [data-box-get-device-password](../../includes/data-box-get-device-password.md)]
-
-7. Provide the device password that you got from the Azure portal in the previous step to sign into the local web UI of the device. Click **Sign in**.
-8. On the **Dashboard**, ensure that the network interfaces are configured.
+1. Provide the device password that you got from the Azure portal in the previous step to sign into the local web UI of the device. Click **Sign in**.
+1. On the **Dashboard**, ensure that the network interfaces are configured.
- If DHCP is enabled in your environment, network interfaces are automatically configured. - If DHCP is not enabled, go to **Set network interfaces**, and assign static IPs if needed.
After the device setup is complete, you can connect to the device shares and cop
## Connect your device 1. To get the device password, go to **General > Device details** in the [Azure portal](https://portal.azure.com).
-2. Assign a static IP address of 192.168.100.5 and subnet 255.255.255.0 to the Ethernet adapter on the computer you are using to connect to Data Box. Access the local web UI of the device at `https://192.168.100.10`. The connection could take up to 5 minutes after you turned on the device.
-3. Sign in using the password from the Azure portal. You see an error indicating a problem with the website's security certificate. Follow the browser-specific instructions to proceed to the web page.
-4. By default, the network settings for the 10 Gbps data interface (or 1 Gbps) are configured as DHCP. If needed, you can configure this interface as static and provide an IP address.
+1. Assign a static IP address of 192.168.100.5 and subnet 255.255.255.0 to the Ethernet adapter on the computer you are using to connect to Data Box. Access the local web UI of the device at `https://192.168.100.10`. The connection could take up to 5 minutes after you turned on the device.
+1. Sign in using the password from the Azure portal. You see an error indicating a problem with the website's security certificate. Follow the browser-specific instructions to proceed to the web page.
+1. By default, the network settings for the 10 Gbps data interface (or 1 Gbps) are configured as DHCP. If needed, you can configure this interface as static and provide an IP address.
::: zone-end
databox Data Box Local Web Ui Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-local-web-ui-admin.md
Previously updated : 12/18/2020 Last updated : 08/25/2021
To restart your Data Box, perform the following steps.
The device shuts down and then restarts.
+## Get share credentials
+
+If you need the username and password for connecting to a share on your device, you can find the share credentials in **Connect and copy** in the local web UI.
+
+When you order your device, you can choose to use default system-generated passwords for the shares on your device or your own passwords. Either way, the share passwords are set at the factory and can't be changed.
+
+To get the credentials for a share:
+
+1. In the local web UI, go to **Connect and copy**. Select **SMB** to get access credentials for the shares associated with your storage account.
+
+ ![Screenshot showing the Connect And Copy page in the local Web UI for a Data Box. The Connect And Copy menu item and the SMB option are highlighted.](media/data-box-local-web-ui-admin/get-share-credentials-01.png)
+
+1. In the **Access share and copy data** dialog box, use the copy icon to copy the **Username** and **Password** corresponding to the share. To close the dialog box, select **OK**.
+
+ ![Screenshot showing the Access Share And Copy Data dialog box in the local Web UI for an SMB share on the Data Box. The Copy icon for the Storage Account and Password options, and the OK button, are highlighted.](media/data-box-local-web-ui-admin/get-share-credentials-02.png)
+
+> [!NOTE]
+> After several failed share connection attempts using an incorrect password, the user account will be locked out of the share. The account lock will clear after a few minutes, and you can connect to the shares again.
+> - Data Box 4.1 and later: The account is locked for 15 minutes after 5 failed login attempts.
+> - Data Box 4.0 and earlier: The account is locked for 30 minutes after 3 failed login attempts.
+ ## Download BOM or manifest files The BOM or the manifest files contain the list of the files that are copied to the Data Box or Data Box Heavy. These files are generated for an import order when you prepare the device to ship.
databox Data Box Portal Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-portal-admin.md
Previously updated : 12/18/2020 Last updated : 08/23/2021
If you're using self-managed shipping, after the copy is complete and before you
|Ready to receive at Azure datacenter |The device is ready to be received at the Azure datacenter. | |Received |The device has been received at the Azure datacenter. |
+## Get device password
+When you order your device, you can choose to use the default system-generated device password or your own password. Either way, the device password is set at the factory and can't be changed.
+You can find out the device password by viewing your order in the Azure portal.
++
+> [!NOTE]
+> After several failed login attempts using an incorrect password, your admin account will be locked out of the device. The account lock will clear after a few minutes, and you can connect again.
+> - Data Box 4.1 and later: The account is locked for 15 minutes after 5 failed login attempts.
+> - Data Box 4.0 and earlier: The account is locked for 30 minutes after 3 failed login attempts.
## Next steps
defender-for-iot Concept Agent Based Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/concept-agent-based-security-alerts.md
Title: Agent based security alerts
+ Title: Micro agent security alerts (Preview)
description: Learn about security alerts and recommended remediation using Defender for IoT device's features and service. Previously updated : 08/25/2021 Last updated : 08/30/2021
-# Defender for IoT devices security alerts
+# Micro agent security alerts (Preview)
Defender for IoT continuously analyzes your IoT solution using advanced analytics and threat intelligence to alert you to malicious activity. In addition, you can create custom alerts based on your knowledge of expected device behavior.
An alert acts as an indicator of potential compromise, and should be investigate
In this article, you will find a list of built-in alerts, which can be triggered on your IoT devices. In addition to built-in alerts, Defender for IoT allows you to define custom alerts based on expected IoT Hub and/or device behavior. For more information, see [customizable alerts](concept-customizable-security-alerts.md).
-## Agent based security alerts
+## Security alerts
| Name | Severity | Data Source | Description | Suggested remediation steps | |--|--|--|--|--|
For more information, see [customizable alerts](concept-customizable-security-al
## Next steps - Defender for IoT service [Overview](overview.md)-- Learn how to [Access your security data](how-to-security-data-access.md)-- Learn more about [Investigating a device](how-to-investigate-device.md)
digital-twins Concepts Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-azure-digital-twins-explorer.md
description: Understand the capabilities and purpose of Azure Digital Twins Explorer Previously updated : 6/1/2021 Last updated : 8/24/2021
>[!NOTE] >This tool is currently in **public preview**.
-Here is a view of the explorer window, showing models and twins that have been populated for a sample graph:
+Here's a view of the explorer window, showing models and twins that have been populated for a sample graph:
:::image type="content" source="media/concepts-azure-digital-twins-explorer/azure-digital-twins-explorer-demo.png" alt-text="Screenshot of Azure Digital Twins Explorer showing sample models and twins." lightbox="media/concepts-azure-digital-twins-explorer/azure-digital-twins-explorer-demo.png":::
-The visual interface is a great tool for exploring and understanding the shape of your graph and model set, as well as making pointed, ad hoc changes to individual twins and relationships.
+The visual interface is a great tool for exploring and understanding the shape of your graph and model set. It also allows you to make pointed, on-the-spot changes to individual twins and relationships.
This article contains more information about the Azure Digital Twins Explorer, including its use cases and an overview of its features. For detailed steps on using each feature, see [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
Azure Digital Twins Explorer is a visual tool designed for users who want to exp
Developers may find this tool especially useful in the following scenarios: * **Exploration**: Use the explorer to learn about Azure Digital Twins and the way it represents your real-world environment. Import sample models and graphs that you can view and edit to familiarize yourself with the service. For guided steps to get started using Azure Digital Twins Explorer, see [Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
-* **Development**: Use the explorer to view and validate your twin graph, as well as investigate specific properties of models, twins, and relationships. Make ad hoc modifications to your graph and its data. For detailed instructions on how to use each feature, see [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+* **Development**: Use the explorer to view and validate your twin graph. You can also use it to investigate specific properties of models, twins, and relationships. Make on-the-spot modifications to your graph and its data. For detailed instructions on how to use each feature, see [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
The explorer's main purpose is to help you visualize and understand your graph, and update your graph as needed. For large-scale solutions and for work that should be repeated or automated, consider using the [APIs and SDKs](./concepts-apis-sdks.md) to interact with your instance through code instead.
To view instructions for contributing to this documentation, visit the [Microsof
Azure Digital Twins Explorer is available for use with all instances of Azure Digital Twins in all [supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins).
-During public preview, however, data may be sent for processing through different regions than the region where the instance is hosted. To avoid this in situations where data sovereignty is a concern, you can download the [open source code](#how-to-contribute) to create a locally-hosted version of the explorer on your own machine.
+During public preview, however, data may be sent for processing through different regions than the region where the instance is hosted. To avoid data being routed in this way in situations where data sovereignty is a concern, you can download the [open source code](#how-to-contribute) to create a locally hosted version of the explorer on your own machine.
### Billing
digital-twins Concepts Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-cli.md
description: Understand the Azure Digital Twins CLI command set. Previously updated : 04/30/2021 Last updated : 8/25/2021
# Azure Digital Twins CLI command set
-In addition to managing your Azure Digital Twins instance in the Azure portal, Azure Digital Twins has a command set for the [Azure CLI](/cli/azure/what-is-azure-cli) that you can use to perform most major actions with the service, including:
+Apart from managing your Azure Digital Twins instance in the Azure portal, Azure Digital Twins also has a command set for the [Azure CLI](/cli/azure/what-is-azure-cli) that you can use to do most major actions with the service, including:
* Managing an Azure Digital Twins instance * Managing models * Managing digital twins
The command set is called **az dt**, and is part of the [Azure IoT extension for
## Uses (deploy and validate)
-In addition to generally managing your instance, the CLI is also a useful tool for deployment and validation.
+Apart from generally managing your instance, the CLI is also a useful tool for deployment and validation.
* The control plane commands can be used to make the deployment of a new instance repeatable or automated. * The data plane commands can be used to quickly check values in your instance, and verify that operations completed as expected.
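For example, a deployment script might pair the two (a sketch, assuming the placeholder instance name `myADTinstance` and resource group `myRG`):

```azurecli
# Control plane: create an Azure Digital Twins instance.
az dt create --dt-name myADTinstance --resource-group myRG

# Data plane: spot-check the graph by querying all twins in the instance.
az dt twin query --dt-name myADTinstance --query-command "SELECT * FROM digitaltwins"
```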
The Azure Digital Twins commands are part of the [Azure IoT extension for Azure
### CLI version requirements
-If you're using the Azure CLI with PowerShell, the extension package requires that your Azure CLI version be **2.3.1** or above.
+If you're using the Azure CLI with PowerShell, your Azure CLI version should be **2.3.1** or above as a requirement of the extension package.
You can check the version of your Azure CLI with this CLI command: ```azurecli
For instructions on how to install or update the Azure CLI to a newer version, s
The Azure CLI will automatically prompt you to install the extension on the first use of a command that requires it.
-Alternatively, you can use the following command to install the extension yourself at any time (or update it if it turns out that you already have an older version). The command can be run in either the [Azure Cloud Shell](../cloud-shell/overview.md) or a [local Azure CLI](/cli/azure/install-azure-cli).
+Otherwise, you can use the following command to install the extension yourself at any time (or update it if it turns out that you already have an older version). The command can be run in either the [Azure Cloud Shell](../cloud-shell/overview.md) or a [local Azure CLI](/cli/azure/install-azure-cli).
```azurecli-interactive az extension add --upgrade --name azure-iot
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
description: Understand how Azure Digital Twins uses custom models to describe entities in your environment. Previously updated : 6/1/2021 Last updated : 8/25/2021
A key characteristic of Azure Digital Twins is the ability to define your own vocabulary and build your twin graph in the self-defined terms of your business. This capability is provided through user-provided **models**. You can think of models as the nouns in a description of your world.
-A model is similar to a **class** in an object-oriented programming language, defining a data shape for one particular concept in your real work environment. Models have names (such as *Room* or *TemperatureSensor*), and contain elements such as properties, telemetry/events, and commands that describe what this type of entity in your environment can do. Later, you will use these models to create [digital twins](concepts-twins-graph.md) that represent specific entities that meet this type description.
+A model is similar to a **class** in an object-oriented programming language, defining a data shape for one particular concept in your real-world environment. Models have names (such as *Room* or *TemperatureSensor*), and contain elements such as properties, telemetry/events, and commands that describe what this type of entity in your environment can do. Later, you'll use these models to create [digital twins](concepts-twins-graph.md) that represent specific entities that meet this type description.
Azure Digital Twins models are represented in the JSON-LD-based **Digital Twin Definition Language (DTDL)**.
Models for Azure Digital Twins are defined using the Digital Twins Definition La
You can view the full language specs for DTDL in GitHub: [Digital Twins Definition Language (DTDL) - Version 2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
-DTDL is based on JSON-LD and is programming-language independent. DTDL is not exclusive to Azure Digital Twins, but is also used to represent device data in other IoT services such as [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md). Azure Digital Twins uses DTDL **version 2** (use of DTDL version 1 with Azure Digital Twins has now been deprecated).
+DTDL is based on JSON-LD and is programming-language independent. DTDL isn't exclusive to Azure Digital Twins, but is also used to represent device data in other IoT services such as [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md). Azure Digital Twins uses DTDL **version 2** (use of DTDL version 1 with Azure Digital Twins has now been deprecated).
The rest of this article summarizes how the language is used in Azure Digital Twins. ### Azure Digital Twins DTDL implementation specifics
-Not all services that use DTDL implement the exact same features of DTDL. For example, IoT Plug and Play does not use the DTDL features that are for graphs, while Azure Digital Twins does not currently implement DTDL commands.
+Not all services that use DTDL implement the exact same features of DTDL. For example, IoT Plug and Play doesn't use the DTDL features that are for graphs, while Azure Digital Twins doesn't currently implement DTDL commands.
For a DTDL model to be compatible with Azure Digital Twins, it must meet these requirements:
-* All top-level DTDL elements in a model must be of type *interface*. This is because Azure Digital Twins model APIs can receive JSON objects that represent either an interface or an array of interfaces. As a result, no other DTDL element types are allowed at the top level.
+* All top-level DTDL elements in a model must be of type *interface*. The reason for this requirement is that Azure Digital Twins model APIs can receive JSON objects that represent either an interface or an array of interfaces. As a result, no other DTDL element types are allowed at the top level.
* DTDL for Azure Digital Twins must not define any *commands*.
-* Azure Digital Twins only allows a single level of component nesting. This means that an interface that's being used as a component can't have any components itself.
+* Azure Digital Twins only allows a single level of component nesting, meaning that an interface that's being used as a component can't have any components itself.
* Interfaces can't be defined inline within other DTDL interfaces; they must be defined as separate top-level entities with their own IDs. Then, when another interface wants to include that interface as a component or through inheritance, it can reference its ID.
-Azure Digital Twins also does not observe the `writable` attribute on properties or relationships. Although this can be set as per DTDL specifications, the value isn't used by Azure Digital Twins. Instead, these are always treated as writable by external clients that have general write permissions to the Azure Digital Twins service.
+Azure Digital Twins also doesn't observe the `writable` attribute on properties or relationships. Although this attribute can be set as per DTDL specifications, the value isn't used by Azure Digital Twins. Instead, these attributes are always treated as writable by external clients that have general write permissions to the Azure Digital Twins service.
## Model overview ### Elements of a model
-Within a model definition, the top-level code item is an **interface**. This encapsulates the entire model, and the rest of the model is defined within the interface.
+Within a model definition, the top-level code item is an **interface**. This type encapsulates the entire model, and the rest of the model is defined within the interface.
A DTDL model interface may contain zero, one, or many of each of the following fields: * **Property** - Properties are data fields that represent the state of an entity (like the properties in many object-oriented programming languages). Properties have backing storage and can be read at any time. For more information, see [Properties and telemetry](#properties-and-telemetry) below.
-* **Telemetry** - Telemetry fields represent measurements or events, and are often used to describe device sensor readings. Unlike properties, telemetry is not stored on a digital twin; it is a series of time-bound data events that need to be handled as they occur. For more information, see [Properties and telemetry](#properties-and-telemetry) below.
-* **Relationship** - Relationships let you represent how a digital twin can be involved with other digital twins. Relationships can represent different semantic meanings, such as *contains* ("floor contains room"), *cools* ("hvac cools room"), *isBilledTo* ("compressor is billed to user"), etc. Relationships allow the solution to provide a graph of interrelated entities. Relationships can also have properties of their own. For more information, see [Relationships](#relationships) below.
-* **Component** - Components allow you to build your model interface as an assembly of other interfaces, if you want. An example of a component is a *frontCamera* interface (and another component interface *backCamera*) that are used in defining a model for a *phone*. You must first define an interface for *frontCamera* as though it were its own model, and then you can reference it when defining *Phone*.
+* **Telemetry** - Telemetry fields represent measurements or events, and are often used to describe device sensor readings. Unlike properties, telemetry isn't stored on a digital twin; it's a series of time-bound data events that need to be handled as they occur. For more information, see [Properties and telemetry](#properties-and-telemetry) below.
+* **Relationship** - Relationships let you represent how a digital twin can be involved with other digital twins. Relationships can represent different semantic meanings, such as *contains* ("floor contains room"), *cools* ("hvac cools room"), *isBilledTo* ("compressor is billed to user"), and so on. Relationships allow the solution to provide a graph of interrelated entities. Relationships can also have properties of their own. For more information, see [Relationships](#relationships) below.
+* **Component** - Components allow you to build your model interface as an assembly of other interfaces, if you want. An example of a component is a *frontCamera* interface (and another component interface *backCamera*) that are used in defining a model for a *phone*. First define an interface for *frontCamera* as though it were its own model, and then reference it when defining *Phone*.
Use a component to describe something that is an integral part of your solution but doesn't need a separate identity, and doesn't need to be created, deleted, or rearranged in the twin graph independently. If you want entities to have independent existences in the twin graph, represent them as separate digital twins of different models, connected by **relationships**.
A DTDL model interface may contain zero, one, or many of each of the following f
### Model code
-Twin type models can be written in any text editor. The DTDL language follows JSON syntax, so you should store models with the extension .json. Using the JSON extension will enable many programming text editors to provide basic syntax checking and highlighting for your DTDL documents. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/).
+Twin type models can be written in any text editor. The DTDL language follows JSON syntax, so you should store models with the extension .json. Using the JSON extension will enable many programming text editors to provide basic syntax checking and highlighting for your DTDL documents. There's also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/).
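Models authored this way can then be uploaded to an instance. A minimal sketch with the az dt command set, assuming a placeholder instance name and a model file in the working directory:

```azurecli
# Upload a DTDL model file to an Azure Digital Twins instance.
az dt model create --dt-name myADTinstance --models ./IHome.json
```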
The fields of the model are:
The fields of the model are:
| `@id` | An identifier for the model. Must be in the format `dtmi:<domain>:<unique-model-identifier>;<model-version-number>`. | | `@type` | Identifies the kind of information being described. For an interface, the type is *Interface*. | | `@context` | Sets the [context](https://niem.github.io/json/reference/json-ld/context/) for the JSON document. Models should use `dtmi:dtdl:context;2`. |
-| `displayName` | [optional] Allows you to give the model a friendly name if desired. |
+| `displayName` | [optional] Gives you the option to define a friendly name for the model. |
| `contents` | All remaining interface data is placed here, as an array of attribute definitions. Each attribute must provide a `@type` (**property**, **telemetry**, **command**, **relationship**, or **component**) to identify the sort of interface information it describes, and then a set of properties that define the actual attribute (for example, `name` and `schema` to define a **property**). | #### Example model
This model describes a Home, with one **property** for an ID. The Home model als
This section goes into more detail about **properties** and **telemetry** in DTDL models.
-For a comprehensive list of the fields that may appear as part of a property, please see [Property in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#property). For a comprehensive list of the fields that may appear as part of telemetry, please see [Telemetry in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#telemetry).
+For a comprehensive list of the fields that may appear as part of a property, see [Property in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#property). For a comprehensive list of the fields that may appear as part of telemetry, see [Telemetry in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#telemetry).
### Difference between properties and telemetry
-Here's some additional guidance on conceptually distinguishing between DTDL **property** and **telemetry** in Azure Digital Twins.
-* **Properties** are expected to have backing storage. This means that you can read a property at any time and retrieve its value. If the property is writable, you can also store a value in the property.
-* **Telemetry** is more like a stream of events; it's a set of data messages that have short lifespans. If you don't set up listening for the event and actions to take when it happens, there is no trace of the event at a later time. You can't come back to it and read it later.
+Here's some guidance on conceptually distinguishing between DTDL **property** and **telemetry** in Azure Digital Twins.
+* **Properties** are expected to have backing storage, which means that you can read a property at any time and retrieve its value. If the property is writable, you can also store a value in the property.
+* **Telemetry** is more like a stream of events; it's a set of data messages that have short lifespans. If you don't set up listening for the event and actions to take when it happens, there's no trace of the event at a later time. You can't come back to it and read it later.
- In C# terms, telemetry is like a C# event. - In IoT terms, telemetry is typically a single measurement sent by a device.
-**Telemetry** is often used with IoT devices, because many devices are not capable of, or interested in, storing the measurement values they generate. They just send them out as a stream of "telemetry" events. In this case, you can't inquire on the device at any time for the latest value of the telemetry field. Instead, you'll need to listen to the messages from the device and take actions as the messages arrive.
+**Telemetry** is often used with IoT devices, because many devices either can't, or aren't interested in, storing the measurement values they generate. Instead, they send them out as a stream of "telemetry" events. In this case, you can't query the device at any time for the latest value of the telemetry field. You'll need to listen to the messages from the device and take actions as the messages arrive.
-As a result, when designing a model in Azure Digital Twins, you will probably use **properties** in most cases to model your twins. This allows you to have the backing storage and the ability to read and query the data fields.
+As a result, when designing a model in Azure Digital Twins, you'll probably use **properties** in most cases to model your twins. Doing so allows you to have the backing storage and the ability to read and query the data fields.
-Telemetry and properties often work together to handle data ingress from devices. As all ingress to Azure Digital Twins is via [APIs](concepts-apis-sdks.md), you will typically use your ingress function to read telemetry or property events from devices, and set a property in Azure Digital Twins in response.
+Telemetry and properties often work together to handle data ingress from devices. As all ingress to Azure Digital Twins is via [APIs](concepts-apis-sdks.md), you'll typically use your ingress function to read telemetry or property events from devices, and set a property in Azure Digital Twins in response.
You can also publish a telemetry event from the Azure Digital Twins API. As with other telemetry, that is a short-lived event that requires a listener to handle.
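As a concrete sketch of both operations with the .NET SDK, assuming an authenticated `DigitalTwinsClient` named `client` and a hypothetical twin `thermostat67` with a `Temperature` property:

```csharp
using System;
using Azure;
using Azure.DigitalTwins.Core;

// Set a property in response to an incoming device reading (backed by storage).
var patch = new JsonPatchDocument();
patch.AppendReplace("/Temperature", 21.5);
await client.UpdateDigitalTwinAsync("thermostat67", patch);

// Publish a short-lived telemetry event; a listener must handle it as it arrives.
await client.PublishTelemetryAsync(
    "thermostat67", Guid.NewGuid().ToString(), "{\"Temperature\": 21.5}");
```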
They can also be [semantic types](#semantic-type-example), which allow you to an
### Basic property and telemetry examples
-Here is a basic example of a **property** on a DTDL model. This example shows the ID property of a Home.
+Here's a basic example of a **property** on a DTDL model. This example shows the ID property of a Home.
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/basic-home-example/IHome.json" highlight="7-11":::
-Here is a basic example of a **telemetry** field on a DTDL model. This example shows Temperature telemetry on a Sensor.
+Here's a basic example of a **telemetry** field on a DTDL model. This example shows Temperature telemetry on a Sensor.
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/basic-home-example/ISensor.json" highlight="7-11":::
The following example shows a Sensor model with a semantic-type telemetry for Te
This section goes into more detail about **relationships** in DTDL models.
-For a comprehensive list of the fields that may appear as part of a relationship, please see [Relationship in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#relationship).
+For a comprehensive list of the fields that may appear as part of a relationship, see [Relationship in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#relationship).
### Basic relationship example
-Here is a basic example of a relationship on a DTDL model. This example shows a relationship on a Home model that allows it to connect to a Floor model.
+Here's a basic example of a relationship on a DTDL model. This example shows a relationship on a Home model that allows it to connect to a Floor model.
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/basic-home-example/IHome.json" highlight="12-18":::
Relationships can be defined with or without a **target**. A target specifies wh
Sometimes, you might want to define a relationship without a specific target, so that the relationship can connect to many different types of twins.
-Here is an example of a relationship on a DTDL model that does not have a target. In this example, the relationship is for defining what sensors a Room might have, and the relationship can connect to any type.
+Here's an example of a relationship on a DTDL model that doesn't have a target. In this example, the relationship is for defining what sensors a Room might have, and the relationship can connect to any type.
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/advanced-home-example/IRoom.json" range="2-27" highlight="20-25":::
The following example shows another version of the Home model, where the `rel_ha
This section goes into more detail about **components** in DTDL models.
-For a comprehensive list of the fields that may appear as part of a component, please see [Component in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#component).
+For a comprehensive list of the fields that may appear as part of a component, see [Component in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#component).
### Basic component example
-Here is a basic example of a component on a DTDL model. This example shows a Room model that makes use of a thermostat model as a component.
+Here's a basic example of a component on a DTDL model. This example shows a Room model that makes use of a thermostat model as a component.
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/advanced-home-example/IRoom.json" highlight="15-19, 28-41":::
If other models in this solution should also contain a thermostat, they can refe
## Model inheritance
-Sometimes, you may want to specialize a model further. For example, it might be useful to have a generic model Room, and specialized variants ConferenceRoom and Gym. To express specialization, **DTDL supports inheritance**. Interfaces can inherit from one or more other interfaces. This is done by adding an `extends` field to the model.
+Sometimes, you may want to specialize a model further. For example, it might be useful to have a generic model Room, and specialized variants ConferenceRoom and Gym. To express specialization, **DTDL supports inheritance**. Interfaces can inherit from one or more other interfaces. You can do so by adding an `extends` field to the model.
-The `extends` section is an interface name, or an array of interface names (allowing the extending interface to inherit from multiple parent models if desired). A single parent can serve as the base model for multiple extending interfaces.
+The `extends` section is an interface name, or an array of interface names (allowing the extending interface to inherit from multiple parent models). A single parent can serve as the base model for multiple extending interfaces.
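As a quick sketch of the field's shape (all model IDs here are hypothetical), an interface inheriting from two parents might declare:

```json
{
  "@id": "dtmi:example:ConferenceRoom;1",
  "@type": "Interface",
  "extends": [ "dtmi:example:Room;1", "dtmi:example:Bookable;1" ],
  "@context": "dtmi:dtdl:context;2"
}
```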
The following example re-imagines the Home model from the earlier DTDL example as a subtype of a larger "core" model. The parent model (Core) is defined first, and then the child model (Home) builds on it by using `extends`.
The following example re-imagines the Home model from the earlier DTDL example a
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/advanced-home-example/IHome.json" range="1-8" highlight="6":::
-In this case, Core contributes an ID and name to Home. Other models can also extend the Core model to get these properties as well. Here is a Room model extending the same parent interface:
+In this case, Core contributes an ID and name to Home. Other models can also extend the Core model to get these properties. Here's a Room model extending the same parent interface:
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/advanced-home-example/IRoom.json" range="2-9" highlight="6"::: Once inheritance is applied, the extending interface exposes all properties from the entire inheritance chain.
-The extending interface cannot change any of the definitions of the parent interfaces; it can only add to them. It also cannot redefine a capability already defined in any of its parent interfaces (even if the capabilities are defined to be the same). For example, if a parent interface defines a `double` property *mass*, the extending interface cannot contain a declaration of *mass*, even if it's also a `double`.
+The extending interface can't change any of the definitions of the parent interfaces; it can only add to them. It also can't redefine a capability already defined in any of its parent interfaces (even if the capabilities are defined to be the same). For example, if a parent interface defines a `double` property *mass*, the extending interface can't contain a declaration of *mass*, even if it's also a `double`.
## Modeling best practices
While designing models to reflect the entities in your environment, it can be us
## Modeling tools
-There are several samples available to make it even easier to deal with models and ontologies. They are located in this repository: [Tools for Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-tools).
+There are several samples available to make it even easier to deal with models and ontologies. They're located in this repository: [Tools for Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-tools).
This section describes the current set of samples in more detail.
### Model uploader
-Once you are finished creating, extending, or selecting your models, you can upload them to your Azure Digital Twins instance to make them available for use in your solution. This is done using the [Azure Digital Twins APIs](concepts-apis-sdks.md), as described in [Manage DTDL models](how-to-manage-model.md#upload-models).
+Once you're finished creating, extending, or selecting your models, you can upload them to your Azure Digital Twins instance to make them available for use in your solution. You can do so by using the [Azure Digital Twins APIs](concepts-apis-sdks.md), as described in [Manage DTDL models](how-to-manage-model.md#upload-models).
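As a rough sketch, the SDK call for that upload is short, assuming an authenticated `DigitalTwinsClient` named `client` and a DTDL document at a hypothetical local path:

```csharp
using System.IO;
using Azure.DigitalTwins.Core;

// dtdlText holds the JSON text of one model; CreateModelsAsync accepts a batch.
string dtdlText = File.ReadAllText("IHome.json");
await client.CreateModelsAsync(new[] { dtdlText });
```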
However, if you have many models to upload, or if they have many interdependencies that would make ordering individual uploads complicated, you can use the [Azure Digital Twins Model Uploader sample](https://github.com/Azure/opendigitaltwins-tools/tree/master/ADTTools#uploadmodels) to upload many models at once. Follow the instructions provided with the sample to configure and use this project to upload models into your own instance.
digital-twins Concepts Twins Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-twins-graph.md
description: Understand the concept of a digital twin, and how their relationships make a graph. Previously updated : 6/1/2021 Last updated : 8/26/2021
In an Azure Digital Twins solution, the entities in your environment are represe
Before you can create a digital twin in your Azure Digital Twins instance, you need to have a *model* uploaded to the service. A model describes the set of properties, telemetry messages, and relationships that a particular twin can have, among other things. For the types of information that are defined in a model, see [Custom models](concepts-models.md).
-After creating and uploading a model, your client app can create an instance of the type; this is a digital twin. For example, after creating a model of Floor, you may create one or several digital twins that use this type (like a Floor-type twin called GroundFloor, another called Floor2, etc.).
+After creating and uploading a model, your client app can create an instance of the type. This instance is a digital twin. For example, after creating a model of Floor, you may create one or several digital twins that use this type (like a Floor-type twin called GroundFloor, another called Floor2, and so on).
[!INCLUDE [digital-twins-versus-device-twins](../../includes/digital-twins-versus-device-twins.md)]
The result of this process is a set of nodes (the digital twins) connected via e
## Create with the APIs
-This section shows what it looks like to create digital twins and relationships from a client application. It contains .NET code examples that utilize the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins), to provide additional context on what goes on inside each of these concepts.
+This section shows what it looks like to create digital twins and relationships from a client application. It contains .NET code examples that use the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins), to provide more context on what goes on inside each of these concepts.
### Create digital twins
Below is a snippet of client code that uses the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins) to instantiate a twin of type Room with a `twinId` that's defined during the instantiation.
-You can initialize the properties of a twin when it is created, or set them later. To create a twin with initialized properties, create a JSON document that provides the necessary initialization values.
+You can initialize the properties of a twin when it's created, or set them later. To create a twin with initialized properties, create a JSON document that provides the necessary initialization values.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="CreateTwin_noHelper":::
You can also use a helper class called `BasicDigitalTwin` to store property fiel
### Create relationships
-Here is some example client code that uses the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins) to build a relationship from one digital twin (the "source" twin) to another digital twin (the "target" twin).
+Here's some example client code that uses the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins) to build a relationship from one digital twin (the "source" twin) to another digital twin (the "target" twin).
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_other.cs" id="CreateRelationship_short"::: ## JSON representations of graph elements
-Digital twin data and relationship data are both stored in JSON format. This means that when you [query the twin graph](how-to-query-graph.md) in your Azure Digital Twins instance, the result will be a JSON representation of digital twins and relationships you have created.
+Digital twin data and relationship data are both stored in JSON format, which means that when you [query the twin graph](how-to-query-graph.md) in your Azure Digital Twins instance, the result will be a JSON representation of digital twins and relationships you've created.
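As a minimal sketch (assuming an authenticated `DigitalTwinsClient` named `client`), you can run a query and read each result back through the `BasicDigitalTwin` helper, which deserializes that JSON representation:

```csharp
using System;
using Azure;
using Azure.DigitalTwins.Core;

AsyncPageable<BasicDigitalTwin> result =
    client.QueryAsync<BasicDigitalTwin>("SELECT * FROM DIGITALTWINS");

await foreach (BasicDigitalTwin twin in result)
{
    // Each twin arrives as JSON, deserialized into the helper class.
    Console.WriteLine($"{twin.Id} ({twin.Metadata.ModelId})");
}
```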
### Digital twin JSON format
When represented as a JSON object, a digital twin will display the following fie
| `<component-name>.<property-name>` | The value of the component's property in JSON (`string`, number type, or object) | | `<component-name>.$metadata` | The metadata information for the component, similar to the root-level `$metadata` |
-Here is an example of a digital twin formatted as a JSON object:
+Here's an example of a digital twin formatted as a JSON object:
```json
{
When represented as a JSON object, a relationship from a digital twin will displ
| `$relationshipName` | The name of the relationship |
| `<property-name>` | [Optional] The value of a property of this relationship, in JSON (`string`, number type, or object) |
-Here is an example of a relationship formatted as a JSON object:
+Here's an example of a relationship formatted as a JSON object:
```json
{
digital-twins How To Authenticate Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-authenticate-client.md
description: See how to write authentication code in a client application Previously updated : 10/7/2020 Last updated : 8/26/2021
After you [set up an Azure Digital Twins instance and authentication](how-to-set
Azure Digital Twins performs authentication using [Azure AD security tokens based on OAuth 2.0](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). To authenticate your SDK, you'll need to get a bearer token with the right permissions to Azure Digital Twins, and pass it along with your API calls.
-This article describes how to obtain credentials using the `Azure.Identity` client library. While this article shows code examples in C#, such as what you'd write for the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), you can use a version of `Azure.Identity` regardless of what SDK you're using (for more on the SDKs available for Azure Digital Twins, see [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md)).
+This article describes how to obtain credentials using the `Azure.Identity` client library. While this article shows code examples in C#, such as what you'd write for the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), you can use a version of `Azure.Identity` regardless of what SDK you're using (for more on the SDKs available for Azure Digital Twins, see [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md)).
## Prerequisites
Three common credential-obtaining methods in `Azure.Identity` are:
* [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) provides a default `TokenCredential` authentication flow for applications that will be deployed to Azure, and is **the recommended choice for local development**. It also can be enabled to try the other two methods recommended in this article; it wraps `ManagedIdentityCredential` and can access `InteractiveBrowserCredential` with a configuration variable.
* [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential?view=azure-dotnet&preserve-view=true) works great in cases where you need [managed identities (MSI)](../active-directory/managed-identities-azure-resources/overview.md), and is a good candidate for working with Azure Functions and deploying to Azure services.
-* [InteractiveBrowserCredential](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true) is intended for interactive applications, and can be used to create an authenticated SDK client
+* [InteractiveBrowserCredential](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true) is intended for interactive applications, and can be used to create an authenticated SDK client.
The rest of this article shows how to use these with the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
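As a minimal sketch, creating an authenticated client with `DefaultAzureCredential` looks like this (the instance hostname is a placeholder):

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Replace with the host name of your Azure Digital Twins instance.
var adtInstanceUrl = "https://<your-instance-hostname>";
var credential = new DefaultAzureCredential();
var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential);
```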
digital-twins How To Create App Registration Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration-cli.md
description: See how to create an Azure AD app registration, as an authentication option for client apps, using the CLI. Previously updated : 5/13/2021 Last updated : 8/27/2021
[!INCLUDE [digital-twins-create-app-registration-selector.md](../../includes/digital-twins-create-app-registration-selector.md)]
-When working with an Azure Digital Twins instance, it is common to interact with that instance through client applications, such as a custom client app or a sample like [Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md). Those applications need to authenticate with Azure Digital Twins in order to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
+When working with an Azure Digital Twins instance, it's common to interact with that instance through client applications, such as a custom client app or a sample like [Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md). Those applications need to authenticate with Azure Digital Twins to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
-This is not required for all authentication scenarios. However, if you are using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up using the [Azure CLI](/cli/azure/what-is-azure-cli). It also covers how to [collect important values](#collect-important-values) that you'll need in order to use the app registration to authenticate.
+The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up using the [Azure CLI](/cli/azure/what-is-azure-cli). It also covers how to [collect important values](#collect-important-values) that you'll need to use the app registration to authenticate.
## Azure AD app registrations
Save the finished file.
### Upload to Cloud Shell
-Next, upload the manifest file you just created to the Cloud Shell, so that you can access it in Cloud Shell commands when configuring the app registration.
+Next, upload the manifest file you created to the Cloud Shell, so that you can access it in Cloud Shell commands when configuring the app registration.
To upload the file, go to the Cloud Shell window in your browser. Select the "Upload/Download files" icon and choose "Upload".
:::image type="content" source="media/how-to-set-up-instance/cloud-shell/cloud-shell-upload.png" alt-text="Screenshot of Azure Cloud Shell. The Upload icon is highlighted.":::
-Navigate to the **manifest.json** file on your machine and select "Open." This will upload the file to the root of your Cloud Shell storage.
+Navigate to the **manifest.json** file on your machine and select "Open." Doing so will upload the file to the root of your Cloud Shell storage.
## Create the registration
Run the following command to create the registration:
az ad app create --display-name <app-registration-name> --available-to-other-tenants false --reply-urls http://localhost --native-app --required-resource-accesses "@manifest.json"
```
-The output of the command is information about the app registration you have created.
+The output of the command is information about the app registration you've created.
## Verify success
You can also verify the app registration was successfully created by using the A
## Collect important values
-Next, collect some important values about the app registration that you'll need in order to use the app registration to authenticate a client application. These values include:
+Next, collect some important values about the app registration that you'll need to use the app registration to authenticate a client application. These values include:
* **resource name**
* **client ID**
* **tenant ID**
To create a **client secret** for your app registration, you'll need your app re
az ad app credential reset --id <client-ID> --append
```
-You can also add optional parameters to this command to specify a credential description, end date, and other details. For more information about the command and its additional parameters, see [az ad app credential reset documentation](/cli/azure/ad/app/credential?view=azure-cli-latest&preserve-view=true#az_ad_app_credential_reset).
+You can also add optional parameters to this command to specify a credential description, end date, and other details. For more information about the command and its parameters, see [az ad app credential reset documentation](/cli/azure/ad/app/credential?view=azure-cli-latest&preserve-view=true#az_ad_app_credential_reset).
The output of this command is information about the client secret that you've created. Copy the value for `password` to use when you need the client secret for authentication.
The output of this command is information about the client secret that you've cr
## Other possible steps for your organization
-It's possible that your organization requires additional actions from subscription Owners/administrators to successfully set up an app registration. The steps required may vary depending on your organization's specific settings.
+It's possible that your organization requires more actions from subscription Owners/administrators to successfully set up an app registration. The steps required may vary depending on your organization's specific settings.
-Here are some common potential activities that an Owner or administrator on the subscription may need to perform.
+Here are some common potential activities that an Owner or administrator on the subscription may need to do.
* Grant admin consent for the app registration. Your organization may have **Admin Consent Required** globally turned on in Azure AD for all app registrations within your subscription. If so, the Owner/administrator may need to grant additional delegated or application permissions.
* Activate public client access by appending `--set publicClient=true` to a create or update command for the registration.
* Set specific reply URLs for web and desktop access using the `--reply-urls` parameter. For more information on using this parameter with `az ad` commands, see the [az ad app documentation](/cli/azure/ad/app?view=azure-cli-latest&preserve-view=true).
digital-twins How To Create App Registration Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration-portal.md
description: See how to create an Azure AD app registration, as an authentication option for client apps, using the Azure portal. Previously updated : 10/13/2020 Last updated : 8/27/2021
[!INCLUDE [digital-twins-create-app-registration-selector.md](../../includes/digital-twins-create-app-registration-selector.md)]
-When working with an Azure Digital Twins instance, it is common to interact with that instance through client applications, such as the custom client app built in [Code a client app](tutorial-code.md). Those applications need to authenticate with Azure Digital Twins in order to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
+When working with an Azure Digital Twins instance, it's common to interact with that instance through client applications, such as the custom client app built in [Code a client app](tutorial-code.md). Those applications need to authenticate with Azure Digital Twins to interact with it, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **app registration**.
-This is not required for all authentication scenarios. However, if you are using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up using the [Azure portal](https://portal.azure.com). It also covers how to [collect important values](#collect-important-values) that you'll need in order to use the app registration to authenticate.
+The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up using the [Azure portal](https://portal.azure.com). It also covers how to [collect important values](#collect-important-values) that you'll need to use the app registration to authenticate.
## Azure AD app registrations
In the *Register an application* page that follows, fill in the requested values
* **Supported account types**: Select *Accounts in this organizational directory only (Default Directory only - Single tenant)*
* **Redirect URI**: An *Azure AD application reply URL* for the Azure AD application. Add a *Public client/native (mobile & desktop)* URI for `http://localhost`.
-When you are finished, select the *Register* button.
+When you're finished, select the *Register* button.
:::image type="content" source="media/how-to-create-app-registration/register-an-application.png" alt-text="Screenshot of the 'Register an application' page in the Azure portal with the described values filled in.":::
When the registration is finished setting up, the portal will redirect you to it
## Collect important values
-Next, collect some important values about the app registration that you'll need in order to use the app registration to authenticate a client application. These values include:
+Next, collect some important values about the app registration that you'll need to use the app registration to authenticate a client application. These values include:
* **resource name**
* **client ID**
* **tenant ID**
From the portal page for your app registration, select *API permissions* from th
:::image type="content" source="media/how-to-create-app-registration/add-permission.png" alt-text="Screenshot of the app registration in the Azure portal, highlighting the 'API permissions' menu option and 'Add a permission' button.":::
-In the *Request API permissions* page that follows, switch to the *APIs my organization uses* tab and search for *Azure digital twins*. Select _**Azure Digital Twins**_ from the search results to proceed with assigning permissions for the Azure Digital Twins APIs.
+In the *Request API permissions* page that follows, switch to the *APIs my organization uses* tab and search for *Azure digital twins*. Select _**Azure Digital Twins**_ from the search results to continue with assigning permissions for the Azure Digital Twins APIs.
:::image type="content" source="media/how-to-create-app-registration/request-api-permissions-1.png" alt-text="Screenshot of the 'Request API Permissions' page search result in the Azure portal showing Azure Digital Twins.":::
Select *Add permissions* when finished.
### Verify success
-On the *API permissions* page, verify that there is now an entry for Azure Digital Twins reflecting Read/Write permissions:
+On the *API permissions* page, verify that there's now an entry for Azure Digital Twins reflecting Read/Write permissions:
:::image type="content" source="media/how-to-create-app-registration/verify-api-permissions.png" alt-text="Screenshot of the API permissions for the Azure AD app registration in the Azure portal, showing 'Read/Write Access' for Azure Digital Twins."::: You can also verify the connection to Azure Digital Twins within the app registration's *manifest.json*, which was automatically updated with the Azure Digital Twins information when you added the API permissions.
-To do this, select **Manifest** from the menu to view the app registration's manifest code. Scroll to the bottom of the code window and look for the following fields and values under `requiredResourceAccess`:
+To do so, select **Manifest** from the menu to view the app registration's manifest code. Scroll to the bottom of the code window and look for the following fields and values under `requiredResourceAccess`:
* `"resourceAppId": "0b07f429-9f4b-4714-9392-cc5e8e80c8b0"` * `"resourceAccess"` > `"id": "4589bd03-58cb-4e6c-b17f-b580e39652f8"`
If these values are missing, retry the steps in the [section for adding the API
## Other possible steps for your organization
-It's possible that your organization requires additional actions from subscription Owners/administrators to successfully set up an app registration. The steps required may vary depending on your organization's specific settings.
+It's possible that your organization requires more actions from subscription Owners/administrators to successfully set up an app registration. The steps required may vary depending on your organization's specific settings.
-Here are some common potential activities that an Owner/administrator on the subscription may need to perform. These and other operations can be performed from the [Azure AD App registrations](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) page in the Azure portal.
+Here are some common potential activities that an Owner/administrator on the subscription may need to do. These and other operations can be performed from the [Azure AD App registrations](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) page in the Azure portal.
* Grant admin consent for the app registration. Your organization may have *Admin Consent Required* globally turned on in Azure AD for all app registrations within your subscription. If so, the Owner/administrator will need to select this button for your company on the app registration's *API permissions* page for the app registration to be valid:
:::image type="content" source="media/how-to-create-app-registration/grant-admin-consent.png" alt-text="Screenshot of the Azure portal showing the 'Grant admin consent' button under API permissions.":::
For more information about app registration and its different setup options, see
In this article, you set up an Azure AD app registration that can be used to authenticate client applications with the Azure Digital Twins APIs.
-Next, read about authentication mechanisms, including one that uses app registrations and others that do not:
+Next, read about authentication mechanisms, including one that uses app registrations and others that don't:
* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Ingest Opcua Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-ingest-opcua-data.md
description: Steps to get your Azure OPC UA data into Azure Digital Twins Previously updated : 5/20/2021 Last updated : 8/27/2021 # Optional fields. Don't forget to remove # if you need a field.
# Ingesting OPC UA data with Azure Digital Twins
-The [OPC Unified Architecture (OPC UA)](https://opcfoundation.org/about/opc-technologies/opc-ua/) is a platform independent, service-oriented architecture for the manufacturing space. It is used to get telemetry data from devices.
+The [OPC Unified Architecture (OPC UA)](https://opcfoundation.org/about/opc-technologies/opc-ua/) is a platform-independent, service-oriented architecture for the manufacturing space. It's used to get telemetry data from devices.
-Getting OPC UA Server data to flow into Azure Digital Twins requires multiple components installed on different devices, as well as some custom code and settings that need to be configured.
+Getting OPC UA Server data to flow into Azure Digital Twins requires multiple components installed on different devices and some custom code and settings that need to be configured.
This article shows how to connect all these pieces together to get your OPC UA nodes into Azure Digital Twins. You can continue to build on this guidance for your own solutions.
Here are the components that will be included in this solution.
| Component | Description |
| --- | --- |
| OPC UA Server | OPC UA Server from [ProSys](https://www.prosysopc.com/products/opc-ua-simulation-server/) or [Kepware](https://www.kepware.com/en-us/products/#KEPServerEX) to simulate the OPC UA data. |
-| [Azure IoT Edge](../iot-edge/about-iot-edge.md) | IoT Edge is an IoT Hub service that gets installed on a local Linux gateway device. It is required for the OPC Publisher module to run and send data to IoT Hub. |
-| [OPC Publisher](https://github.com/Azure/iot-edge-opc-publisher) | This is an IoT Edge module built by the Azure Industrial IoT team. This module connects to your OPC UA Server and sends the node data into Azure IoT Hub. |
+| [Azure IoT Edge](../iot-edge/about-iot-edge.md) | IoT Edge is an IoT Hub service that gets installed on a local Linux gateway device. It's required for the OPC Publisher module to run and send data to IoT Hub. |
+| [OPC Publisher](https://github.com/Azure/iot-edge-opc-publisher) | This component is an IoT Edge module built by the Azure Industrial IoT team. This module connects to your OPC UA Server and sends the node data into Azure IoT Hub. |
| [Azure IoT Hub](../iot-hub/about-iot-hub.md) | OPC Publisher sends the OPC UA telemetry into Azure IoT Hub. From there, you can process the data through an Azure Function and into Azure Digital Twins. |
| Azure Digital Twins | The platform that enables you to create a digital representation of real-world things, places, business processes, and people. |
| [Azure function](../azure-functions/functions-overview.md) | A custom Azure function is used to process the telemetry flowing in Azure IoT Hub to the proper twins and properties in Azure Digital Twins. |
For more detailed information on installing each of these pieces, see the follow
### Set up OPC UA Server
-For this article, you do not need access to physical devices running a real OPC UA Server. Instead, you can install the free [Prosys OPC UA Simulation Server](https://www.prosysopc.com/products/opc-ua-simulation-server/) on a Windows VM to generate the OPC UA data. This section walks through this setup.
+For this article, you don't need access to physical devices running a real OPC UA Server. Instead, you can install the free [Prosys OPC UA Simulation Server](https://www.prosysopc.com/products/opc-ua-simulation-server/) on a Windows VM to generate the OPC UA data. This section walks through this setup.
If you already have a physical OPC UA device or another OPC UA simulation server you'd like to use, you can skip ahead to the next section, [Set up IoT Edge device](#set-up-iot-edge-device).
The Prosys Software requires a simple virtual resource. Using the [Azure portal]
:::image type="content" source="media/how-to-ingest-opcua-data/create-windows-virtual-machine-1.png" alt-text="Screenshot of the Azure portal, showing the Basics tab of Windows virtual machine setup." lightbox="media/how-to-ingest-opcua-data/create-windows-virtual-machine-1.png":::
-Your VM must be reachable over the internet. For simplicity in this walkthrough, you can open all ports and assign the VM a Public IP address. This is done in the **Networking** tab of virtual machine setup.
+Your VM must be reachable over the internet. For simplicity in this walkthrough, you can open all ports and assign the VM a Public IP address. You can do so in the **Networking** tab of virtual machine setup.
:::image type="content" source="media/how-to-ingest-opcua-data/create-windows-virtual-machine-2.png" alt-text="Screenshot of the Azure portal, showing the Networking tab of Windows virtual machine setup.":::
Next, copy the value of **Connection Address (UA TCP)**. Paste it somewhere safe
`opc.tcp://<ip-address>:53530/OPCUA/SimulationServer`
-You will use this updated value later in this article.
+You'll use this updated value later in this article.
Finally, view the simulation nodes provided by default with the server by selecting the **Objects** tab and expanding the Objects::FolderType and Simulation::FolderType folders. You'll see the simulation nodes, each with its own unique `NodeId` value.
First, [create an Azure IoT Hub instance](../iot-hub/iot-hub-create-through-port
:::image type="content" source="media/how-to-ingest-opcua-data/iot-hub.png" alt-text="Screenshot of the Azure portal showing properties of an IoT Hub.":::
-After you have created the Azure IoT Hub instance, select **IoT Edge** from the instance's left navigation menu, and select **Add an IoT Edge device**.
+After you've created the Azure IoT Hub instance, select **IoT Edge** from the instance's left navigation menu, and select **Add an IoT Edge device**.
:::image type="content" source="media/how-to-ingest-opcua-data/iot-edge-1.png" alt-text="Screenshot of adding an IoT Edge device in the Azure portal."::: Follow the prompts to create a new device.
-Once your device is created, copy either the **Primary Connection String** or **Secondary Connection String** value. You will need this later when you set up the edge device.
+Once your device is created, copy either the **Primary Connection String** or **Secondary Connection String** value. You'll need this value later when you set up the edge device.
:::image type="content" source="media/how-to-ingest-opcua-data/iot-edge-2.png" alt-text="Screenshot of the Azure portal showing IoT Edge device connection strings.":::
In this section, you set up IoT Edge and IoT Hub in preparation to create a gate
### Set up gateway device
-In order to get your OPC UA Server data into IoT Hub, you need a device that runs IoT Edge with the OPC Publisher module. OPC Publisher will then listen to OPC UA node updates and will publish the telemetry into IoT Hub in JSON format.
+To get your OPC UA Server data into IoT Hub, you need a device that runs IoT Edge with the OPC Publisher module. OPC Publisher will then listen to OPC UA node updates and will publish the telemetry into IoT Hub in JSON format.
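To give a sense of what arrives at IoT Hub, here's a rough sketch of a single published node update (the exact shape varies by OPC Publisher version and configuration, and the values here are hypothetical):

```json
{
  "NodeId": "ns=3;i=1001",
  "ApplicationUri": "urn:OPCUA:SimulationServer",
  "Value": {
    "Value": 20.5,
    "SourceTimestamp": "2021-08-27T12:00:00Z"
  }
}
```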
#### Create Ubuntu Server virtual machine
Using the [Azure portal](https://portal.azure.com), create an Ubuntu Server virt
* **Availability options**: No infrastructure redundancy required
* **Image**: Ubuntu Server 18.04 LTS - Gen1
* **Size**: Standard_B1ms - 1 vcpu, 2 GiB memory
- - The default size (Standard_b1s ΓÇô vcpu, 1GiB memory) is too slow for RDP. Updating it to the 2 GiB memory will provide a better RDP experience.
+ - The default size (Standard_b1s, 1 vCPU, 1 GiB memory) is too slow for RDP. Updating it to 2 GiB of memory will provide a better RDP experience.
:::image type="content" source="media/how-to-ingest-opcua-data/ubuntu-virtual-machine.png" alt-text="Screenshot of the Azure portal showing Ubuntu virtual machine settings.":::
Follow the rest of the prompts to create the module.
After about 15 seconds, you can run the `iotedge list` command on your gateway device, which lists all the modules running on your IoT Edge device. You should see the OPCPublisher module up and running. Finally, go to the `/iiotedge` directory and create a *publishednodes.json* file. The IDs in the file need to match the `NodeId` values that you [gathered earlier from the OPC Server](#install-opc-ua-simulation-software). Your file should look like something like this:
Then, run the following command:
sudo iotedge logs OPCPublisher -f
```
-The command will result in the output of the OPC Publisher logs. If everything is configured and running correctly, you will see something like the following:
+The command will result in the output of the OPC Publisher logs. If everything is configured and running correctly, you'll see something like the following screenshot:
Data should now be flowing from an OPC UA Server into your IoT Hub.
The data flow in this section involves these steps:
### Create opcua-mapping.json file
-First, create your *opcua-mapping.json* file. Start with a blank JSON file and fill in entries that map `NodeId` values to `twinId` values and properties in Azure Digital Twins, according to the example and schema below. You will need to create a mapping entry for every `NodeId`.
+First, create your *opcua-mapping.json* file. Start with a blank JSON file and fill in entries that map `NodeId` values to `twinId` values and properties in Azure Digital Twins, according to the example and schema below. You'll need to create a mapping entry for every `NodeId`.
```JSON
[
First, create your *opcua-mapping.json* file. Start with a blank JSON file and f
]
```
-Here is the schema for the entries:
+Here's the schema for the entries:
| Property | Description | Required |
| --- | --- | --- |
| NodeId | Value from the OPC UA node. For example: ns=3;i={value} | ✔ |
| TwinId | TwinId ($dtId) of the twin you want to save the telemetry value for | ✔ |
| Property | Name of the property on the twin to save the telemetry value | ✔ |
-| ModelId | The modelId to create the twin if the TwinId does not exist | |
+| ModelId | The modelId to create the twin if the TwinId doesn't exist | |
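Putting the schema together, a single mapping entry might look like the following sketch (the twin ID, property, and model ID are hypothetical):

```json
[
  {
    "NodeId": "ns=3;i=1001",
    "TwinId": "thermostat67",
    "Property": "Temperature",
    "ModelId": "dtmi:example:Thermostat;1"
  }
]
```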
> [!TIP]
> For a complete example of an *opcua-mapping.json* file, see the [OPC UA to Azure Digital Twins GitHub repo](https://github.com/Azure-Samples/opcua-to-azure-digital-twins).
Next, create a [shared access signature for the container](../storage/common/sto
In this section, you'll publish an Azure function that you downloaded in [Prerequisites](#prerequisites) that will process the OPC UA data and update Azure Digital Twins. 1. Navigate to the downloaded [OPC UA to Azure Digital Twins](https://github.com/Azure-Samples/opcua-to-azure-digital-twins) project on your local machine, and into the *Azure Functions/OPCUAFunctions* folder. Open the **OPCUAFunctions.sln** solution in Visual Studio.
-2. Publish the project to a function app in Azure. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+2. Publish the project to a function app in Azure. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
#### Configure the function app
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-maps.md
description: See how to use Azure Functions to create a function that can use the twin graph and Azure Digital Twins notifications to update an Azure Maps indoor map. Previously updated : 1/19/2021 Last updated : 8/27/2021
This article walks through the steps required to use Azure Digital Twins data to update information displayed on an *indoor map* using [Azure Maps](../azure-maps/about-azure-maps.md). Azure Digital Twins stores a graph of your IoT device relationships and routes telemetry to different endpoints, making it the perfect service for updating informational overlays on maps.
-This how-to will cover:
+This guide will cover:
1. Configuring your Azure Digital Twins instance to send twin update events to a function in [Azure Functions](../azure-functions/functions-overview.md).
2. Creating a function to update an Azure Maps indoor maps feature stateset.
This how-to will cover:
### Prerequisites
* Follow the Azure Digital Twins tutorial in [Connect an end-to-end solution](./tutorial-end-to-end.md).
- * You'll be extending this twin with an additional endpoint and route. You will also be adding another function to your function app from that tutorial.
+ * You'll be extending this twin with an additional endpoint and route. You'll also be adding another function to your function app from that tutorial.
* Follow the Azure Maps tutorial in [Use Azure Maps Creator to create indoor maps](../azure-maps/tutorial-creator-indoor-maps.md) to create an Azure Maps indoor map with a *feature stateset*.
- * [Feature statesets](../azure-maps/creator-indoor-maps.md#feature-statesets) are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps tutorial above, the feature stateset stores room status that you will be displaying on a map.
- * You will need your feature *stateset ID* and Azure Maps *subscription key*.
+ * [Feature statesets](../azure-maps/creator-indoor-maps.md#feature-statesets) are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps tutorial above, the feature stateset stores room status that you'll be displaying on a map.
+ * You'll need your feature *stateset ID* and Azure Maps *subscription key*.
### Topology
az functionapp config appsettings set --name <your-function-app-name> --resourc
To see live-updating temperature, follow the steps below:
-1. Begin sending simulated IoT data by running the **DeviceSimulator** project from the Azure Digital Twins [Connect an end-to-end solution](tutorial-end-to-end.md). The instructions for this are in the [Configure and run the simulation](././tutorial-end-to-end.md#configure-and-run-the-simulation) section.
+1. Begin sending simulated IoT data by running the **DeviceSimulator** project from the Azure Digital Twins [Connect an end-to-end solution](tutorial-end-to-end.md). The instructions for this process are in the [Configure and run the simulation](././tutorial-end-to-end.md#configure-and-run-the-simulation) section.
2. Use [the Azure Maps Indoor module](../azure-maps/how-to-use-indoor-module.md) to render your indoor maps created in Azure Maps Creator.
    1. Copy the HTML from the [Example: Use the Indoor Maps Module](../azure-maps/how-to-use-indoor-module.md#example-use-the-indoor-maps-module) section of [Use the Azure Maps Indoor Maps module](../azure-maps/how-to-use-indoor-module.md) to a local file.
    1. Replace the *subscription key*, *tilesetId*, and *statesetID* in the local HTML file with your values.
Both samples send temperature in a compatible range, so you should see the color
## Store your maps information in Azure Digital Twins
-Now that you have a hardcoded solution to updating your maps information, you can use the Azure Digital Twins graph to store all of the information necessary for updating your indoor map. This would include the stateset ID, maps subscription ID, and feature ID of each map and location respectively.
+Now that you have a hardcoded solution to updating your maps information, you can use the Azure Digital Twins graph to store all of the information necessary for updating your indoor map. This information would include the stateset ID, maps subscription ID, and feature ID of each map and location respectively.
A solution for this specific example would involve updating each top-level space to have a stateset ID and maps subscription ID attribute, and updating each room to have a feature ID. You would need to set these values once when initializing the twin graph, then query those values for each twin update event.
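For example, a minimal sketch of that lookup with the .NET SDK, assuming an authenticated `DigitalTwinsClient` named `client` and hypothetical twin and property names:

```csharp
using Azure.DigitalTwins.Core;

string query = "SELECT * FROM DIGITALTWINS T WHERE T.$dtId = 'BuildingA'";

await foreach (BasicDigitalTwin twin in client.QueryAsync<BasicDigitalTwin>(query))
{
    // Hypothetical property names set during graph initialization.
    object statesetId = twin.Contents["statesetId"];
    object mapsSubscriptionKey = twin.Contents["mapsSubscriptionKey"];
}
```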
-Depending on the configuration of your topology, you will be able to store these three attributes at different levels correlating to the granularity of your map.
+Depending on the configuration of your topology, you can store these three attributes at different levels correlating to the granularity of your map.
## Next steps
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
description: See how to create, edit, and delete a model within Azure Digital Twins. Previously updated : 8/13/2021 Last updated : 8/30/2021
Following this method, you can go on to define models for the hospital's wards,
Once models are created, you can upload them to the Azure Digital Twins instance.
-When you're ready to upload a model, you can use the following code snippet:
+When you're ready to upload a model, you can use the following code snippet for the [.NET SDK](/dotnet/api/overview/azure/digitaltwins/management?view=azure-dotnet&preserve-view=true):
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="CreateModel":::
-Observe that the `CreateModels` method accepts multiple files in one single transaction. Here's a sample to illustrate:
+You can also upload multiple models in a single transaction.
+
+If you're using the SDK, you can upload multiple model files with the `CreateModels` method like this:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="CreateModels_multi":::
-Model files can contain more than a single model. In this case, the models need to be placed in a JSON array. For example:
+If you're using the [REST APIs](/rest/api/azure-digitaltwins/) or [Azure CLI](/cli/azure/dt?view=azure-cli-latest&preserve-view=true), you can also upload multiple models by placing multiple model definitions in a single JSON file to be uploaded together. In this case, the models should be placed in a JSON array within the file, like in the following example:
:::code language="json" source="~/digital-twins-docs-samples/models/Planet-Moon.json":::
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/overview.md
description: Overview of what can be done with Azure Digital Twins. Previously updated : 8/19/2021 Last updated : 8/23/2021
**Azure Digital Twins** is a platform as a service (PaaS) offering that enables the creation of twin graphs based on digital models of entire environments. These environments could be buildings, factories, farms, energy networks, railways, stadiums, and more, even entire cities. These digital models can be used to gain insights that drive better products, optimized operations, reduced costs, and breakthrough customer experiences.
-Leverage your domain expertise on top of Azure Digital Twins to build customized, connected solutions that:
+Take advantage of your domain expertise on top of Azure Digital Twins to build customized, connected solutions that:
* Model any environment, and bring digital twins to life in a scalable and secure manner
* Connect assets such as IoT devices and existing business systems
* Use a robust event system to build dynamic business logic and data processing
You can think of these model definitions as a specialized vocabulary to describe
[!INCLUDE [digital-twins-versus-device-twins](../../includes/digital-twins-versus-device-twins.md)]
-Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins in terms of their state properties, telemetry events, commands, components, and relationships.
+Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins by their state properties, telemetry events, commands, components, and relationships.
* Models define semantic **relationships** between your entities so that you can connect your twins into a graph that reflects their interactions. You can think of the models as nouns in a description of your world, and the relationships as verbs.
* You can also specialize twins using model inheritance. One model can inherit from another.
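As a minimal sketch (all IDs and names here are hypothetical), a DTDL interface bundling these elements might look like this:

```json
{
  "@id": "dtmi:example:Room;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "contents": [
    { "@type": "Property", "name": "roomName", "schema": "string" },
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    { "@type": "Relationship", "name": "contains", "target": "dtmi:example:Sensor;1" }
  ]
}
```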
-DTDL is used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md). This helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
+DTDL is used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md). This type of commonality helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
### Live execution environment
You can also drive Azure Digital Twins from other data sources, using REST APIs
### Output to ADX, TSI, storage, and analytics
-The data in your Azure Digital Twins model can be routed to downstream Azure services for additional analytics or storage. This is provided through **event routes**, which use [Event Hub](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), or [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) to drive your desired data flows.
+The data in your Azure Digital Twins model can be routed to downstream Azure services for more analytics or storage. This functionality is provided through **event routes**, which use [Event Hub](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), or [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) to drive your data flows.
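A minimal sketch of creating such a route with the .NET SDK, assuming an endpoint named `myEventHubEndpoint` has already been created on the instance and an authenticated `DigitalTwinsClient` named `client`:

```csharp
using Azure.DigitalTwins.Core;

// Route twin update events to a previously created endpoint.
var route = new DigitalTwinsEventRoute(
    "myEventHubEndpoint", "type = 'Microsoft.DigitalTwins.Twin.Update'");
await client.CreateOrReplaceEventRouteAsync("myRouteId", route);
```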
Some things you can do with event routes include:
* Sending digital twin data to ADX for querying with the [Azure Digital Twins query plugin for Azure Data Explorer (ADX)](concepts-data-explorer-plugin.md)
Some things you can do with event routes include:
* Analyzing Azure Digital Twins data with [Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md), or other Microsoft data analytics tools
* Integrating larger workflows with Logic Apps
-This is another way that Azure Digital Twins can connect into a larger solution, and support your custom needs for continued work with these insights.
+This option is another way that Azure Digital Twins can connect into a larger solution, and support your custom needs for continued work with these insights.
## Azure Digital Twins in a solution context
Azure Digital Twins is commonly used in combination with other Azure services as part of a larger IoT solution. A complete solution using Azure Digital Twins may contain the following parts:
-* The Azure Digital Twins service instance. This stores your twin models and your twin graph with its state, and orchestrates event processing.
+* The Azure Digital Twins service instance. This service stores your twin models and your twin graph with its state, and orchestrates event processing.
* One or more client apps that drive the Azure Digital Twins instance by configuring models, creating topology, and extracting insights from the twin graph.
* One or more external compute resources to process events generated by Azure Digital Twins, or connected data sources such as devices. One common way to provide compute resources is via [Azure Functions](../azure-functions/functions-overview.md).
* An IoT hub to provide device management and IoT data stream capabilities.
The following diagram shows where Azure Digital Twins lies in the context of a l
## Service limits
-You can read about the **service limits** of Azure Digital Twins in the [Azure Digital Twins service limits article](reference-service-limits.md). This can be useful while working with the service to understand the service's functional and rate limitations, as well as which limits can be adjusted if necessary.
+You can read about the **service limits** of Azure Digital Twins in the [Azure Digital Twins service limits article](reference-service-limits.md). This resource can be useful while working with the service to understand the service's functional and rate limitations, as well as which limits can be adjusted if necessary.
## Terminology
digital-twins Resources Compare Original Release https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/resources-compare-original-release.md
description: Understand what has changed in the new version of Azure Digital Twins Previously updated : 1/28/2021 Last updated : 8/24/2021
The chart below provides a side-by-side view of concepts that have changed betwe
| Topic | In original version | In current version |
| --- | --- | --- |
| **Modeling**<br>*More flexible* | The original release was designed around smart spaces, so it came with a built-in vocabulary for buildings. | The current Azure Digital Twins is domain-agnostic. You can define your own custom vocabulary and custom models for your solution, to represent more kinds of environments in more flexible ways.<br><br>Learn more in [Custom models](concepts-models.md). |
-| **Topology**<br>*More flexible*| The original release supported a tree data structure, tailored to smart spaces. Digital twins were connected with hierarchical relationships. | With the current release, your digital twins can be connected into arbitrary graph topologies, organized however you want. This gives you more flexibility to express the complex relationships of the real world.<br><br>Learn more in [Digital twins and the twin graph](concepts-twins-graph.md). |
-| **Compute**<br>*Richer, more flexible* | In the original release, logic for processing events and telemetry was defined in JavaScript user-defined functions (UDFs). Debugging with UDFs was limited. | The current release has an open compute model: you provide custom logic by attaching external compute resources like [Azure Functions](../azure-functions/functions-overview.md). This lets you use a programming language of your choice, access custom code libraries without restriction, and take advantage of development and debugging resources that the external service may have.<br><br>To see an end-to-end scenario driven by data flow through Azure functions, see [Connect an end-to-end solution](tutorial-end-to-end.md). |
-| **Device management with IoT Hub**<br>*More accessible* | The original release managed devices with an instance of [IoT Hub](../iot-hub/about-iot-hub.md) that was internal to the Azure Digital Twins service. This integrated hub was not fully accessible to developers. | In the current release, you "bring your own" IoT hub, by attaching an independently-created IoT Hub instance (along with any devices it already manages). This gives you full access to IoT Hub's capabilities and puts you in control of device management.<br><br>Learn more in [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md). |
-| **Security**<br>*More standard* | The original release had pre-defined roles that you could use to manage access to your instance. | The current release integrates with the same [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) back-end service that other Azure services use. This may make it simpler to authenticate between other Azure services in your solution, like IoT Hub, Azure Functions, Event Grid, and more.<br>With RBAC, you can still use pre-defined roles, or you can build and configure custom roles.<br><br>Learn more in [Security for Azure Digital Twins solutions](concepts-security.md). |
+| **Topology**<br>*More flexible*| The original release supported a tree data structure, tailored to smart spaces. Digital twins were connected with hierarchical relationships. | With the current release, your digital twins can be connected into arbitrary graph topologies, organized however you want. This freedom gives you more flexibility to express the complex relationships of the real world.<br><br>Learn more in [Digital twins and the twin graph](concepts-twins-graph.md). |
+| **Compute**<br>*Richer, more flexible* | In the original release, logic for processing events and telemetry was defined in JavaScript user-defined functions (UDFs). Debugging with UDFs was limited. | The current release has an open compute model: you provide custom logic by attaching external compute resources like [Azure Functions](../azure-functions/functions-overview.md). This functionality lets you use a programming language of your choice, access custom code libraries without restriction, and take advantage of development and debugging resources that the external service may have.<br><br>To see an end-to-end scenario driven by data flow through Azure functions, see [Connect an end-to-end solution](tutorial-end-to-end.md). |
+| **Device management with IoT Hub**<br>*More accessible* | The original release managed devices with an instance of [IoT Hub](../iot-hub/about-iot-hub.md) that was internal to the Azure Digital Twins service. This integrated hub wasn't fully accessible to developers. | In the current release, you "bring your own" IoT hub, by attaching an independently created IoT Hub instance (along with any devices it already manages). This architecture gives you full access to IoT Hub's capabilities and puts you in control of device management.<br><br>Learn more in [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md). |
+| **Security**<br>*More standard* | The original release had pre-defined roles that you could use to manage access to your instance. | The current release integrates with the same [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) back-end service that other Azure services use. This type of integration may make it simpler to authenticate between other Azure services in your solution, like IoT Hub, Azure Functions, Event Grid, and more.<br>With RBAC, you can still use pre-defined roles, or you can build and configure custom roles.<br><br>Learn more in [Security for Azure Digital Twins solutions](concepts-security.md). |
| **Scalability**<br>*Greater* | The original release had scale limitations for devices, messages, graphs, and scale units. Only one instance of Azure Digital Twins was supported per subscription. | The current release relies on a new architecture with improved scalability, and has greater compute power. It also supports 10 instances per region, per subscription.<br><br>See [Azure Digital Twins service limits](reference-service-limits.md) for details of the limits in the current release. |

## Service limits
digital-twins Troubleshoot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-diagnostics.md
description: See how to enable logging with diagnostics settings and query the logs for immediate viewing. Previously updated : 2/24/2021 Last updated : 8/24/2021

# Troubleshooting Azure Digital Twins: Diagnostics logging
-Azure Digital Twins can collect logs for your service instance to monitor its performance, access, and other data. You can use these logs to get an idea of what is happening in your Azure Digital Twins instance, and perform root-cause analysis on issues without needing to contact Azure support.
+Azure Digital Twins can collect logs for your service instance to monitor its performance, access, and other data. You can use these logs to get an idea of what is happening in your Azure Digital Twins instance, and analyze root causes on issues without needing to contact Azure support.
This article shows you how to [configure diagnostic settings](#turn-on-diagnostic-settings) in the [Azure portal](https://portal.azure.com) to start collecting logs from your Azure Digital Twins instance. You can also specify where the logs should be stored (such as Log Analytics or a storage account of your choice).
After setting up logs, you can also [query the logs](#view-and-query-logs) to qu
## Turn on diagnostic settings
-Turn on diagnostic settings to start collecting logs on your Azure Digital Twins instance. You can also choose the destination where the exported logs should be stored. Here is how to enable diagnostic settings for your Azure Digital Twins instance.
+Turn on diagnostic settings to start collecting logs on your Azure Digital Twins instance. You can also choose the destination where the exported logs should be stored. Here's how to enable diagnostic settings for your Azure Digital Twins instance.
1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
Turn on diagnostic settings to start collecting logs on your Azure Digital Twins
 - Archive to a storage account
 - Stream to an event hub
- You may be asked to fill in additional details if they are necessary for your destination selection.
+ You may be asked to fill in more details if they're necessary for your destination selection.
4. Save the new settings.
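If you prefer scripting this setup, a rough Azure CLI equivalent looks like the following; the resource IDs, setting name, and category list are placeholders to adapt:

```bash
# Send two categories of Azure Digital Twins logs to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name adt-diagnostics \
  --resource <azure-digital-twins-instance-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category": "ADTModelsOperation", "enabled": true},
           {"category": "ADTQueryOperation", "enabled": true}]'
```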
Here are more details about the categories of logs that Azure Digital Twins coll
| Log category | Description |
| --- | --- |
-| ADTModelsOperation | Log all API calls pertaining to Models |
-| ADTQueryOperation | Log all API calls pertaining to Queries |
-| ADTEventRoutesOperation | Log all API calls pertaining to Event Routes as well as egress of events from Azure Digital Twins to an endpoint service like Event Grid, Event Hubs and Service Bus |
-| ADTDigitalTwinsOperation | Log all API calls pertaining individual twins |
+| ADTModelsOperation | Log all API calls related to Models |
+| ADTQueryOperation | Log all API calls related to Queries |
+| ADTEventRoutesOperation | Log all API calls related to Event Routes and egress of events from Azure Digital Twins to an endpoint service like Event Grid, Event Hubs, and Service Bus |
+| ADTDigitalTwinsOperation | Log all API calls related to individual twins |
-Each log category consists of operations of write, read, delete, and action. These map to REST API calls as follows:
+Each log category consists of operations of write, read, delete, and action. These operations map to REST API calls as follows:
| Event type | REST API operations |
| --- | --- |
Each log category consists of operations of write, read, delete, and action. Th
| Delete | DELETE |
| Action | POST |
-Here is a comprehensive list of the operations and corresponding [Azure Digital Twins REST API calls](/rest/api/azure-digitaltwins/) that are logged in each category.
+Here's a comprehensive list of the operations and corresponding [Azure Digital Twins REST API calls](/rest/api/azure-digitaltwins/) that are logged in each category.
>[!NOTE]
> Each log category contains several operations/REST API calls. In the table below, each log category maps to all operations/REST API calls underneath it until the next log category is listed.
Each log category has a schema that defines how events in that category are repo
### API log schemas
-This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, `ADTQueryOperation`. The same schema is also used for `ADTEventRoutesOperation`, with the **exception** of the `Microsoft.DigitalTwins/eventroutes/action` operation name (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation`. The same schema is also used for `ADTEventRoutesOperation`, **except** for the `Microsoft.DigitalTwins/eventroutes/action` operation name (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
The schema contains information pertinent to API calls to an Azure Digital Twins instance.
Here are the field and property descriptions for API logs.
| `Time` | DateTime | The date and time that this event occurred, in UTC |
| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
| `OperationName` | String | The type of action being performed during the event |
-| `OperationVersion` | String | The API Version utilized during the event |
+| `OperationVersion` | String | The API Version used during the event |
| `Category` | String | The type of resource being emitted |
| `ResultType` | String | Outcome of the event |
| `ResultSignature` | String | Http status code for the event |
Here are the field and property descriptions for API logs.
| `ApplicationId` | Guid | Application ID used in bearer authorization |
| `Level` | Int | The logging severity of the event |
| `Location` | String | The region where the event took place |
-| `RequestUri` | Uri | The endpoint utilized during the event |
+| `RequestUri` | Uri | The endpoint used during the event |
| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
-| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, etc. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
Below are example JSON bodies for these types of logs.
#### ADTEventRoutesOperation
-Here is an example JSON body for an `ADTEventRoutesOperation` that is **not** of `Microsoft.DigitalTwins/eventroutes/action` type (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+Here's an example JSON body for an `ADTEventRoutesOperation` that is **not** of `Microsoft.DigitalTwins/eventroutes/action` type (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
```json {
Here is an example JSON body for an `ADTEventRoutesOperation` that is **not** of
### Egress log schemas
-This is the schema for `ADTEventRoutesOperation` logs specific to the `Microsoft.DigitalTwins/eventroutes/action` operation name. These contain details pertaining to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
+This schema applies to `ADTEventRoutesOperation` logs specific to the `Microsoft.DigitalTwins/eventroutes/action` operation name. These logs contain details related to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
| Field name | Data type | Description |
| --- | --- | --- |
This is the schema for `ADTEventRoutesOperation` logs specific to the `Microsoft
| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
-| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, etc. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
| `EndpointName` | String | The name of the egress endpoint created in Azure Digital Twins |
Below are example JSON bodies for these types of logs.
#### ADTEventRoutesOperation for Microsoft.DigitalTwins/eventroutes/action
-Here is an example JSON body for an `ADTEventRoutesOperation` that of `Microsoft.DigitalTwins/eventroutes/action` type.
+Here's an example JSON body for an `ADTEventRoutesOperation` that is of the `Microsoft.DigitalTwins/eventroutes/action` type.
```json {
Earlier in this article, you configured the types of logs to store and specified
To troubleshoot issues and generate insights from these logs, you can write **custom queries**. To get started, you can also take advantage of a few example queries provided by the service, which address common questions that customers may have about their instance.
-Here is how to query the logs for your instance.
+Here's how to query the logs for your instance.
1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
Here is how to query the logs for your instance.
:::image type="content" source="media/troubleshoot-diagnostics/logs.png" alt-text="Screenshot showing the Logs page for an Azure Digital Twins instance in the Azure portal with the Queries window overlaid, showing prebuilt queries." lightbox="media/troubleshoot-diagnostics/logs.png":::
- These are prebuilt example queries written for various logs. You can select one of the queries to load it into the query editor and run it to see these logs for your instance.
+ These queries are prebuilt examples written for various logs. You can select one of the queries to load it into the query editor and run it to see these logs for your instance.
You can also close the *Queries* window without running anything to go straight to the query editor page, where you can write or edit custom query code.
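You can also run Kusto queries outside the portal. Here's a sketch with the Azure CLI, assuming the `log-analytics` CLI extension and a placeholder workspace GUID:

```bash
# Return the ten most recent query-operation log entries from the last hour.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "ADTQueryOperation | where TimeGenerated > ago(1h) | take 10"
```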
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
description: Tutorial to build out an end-to-end Azure Digital Twins solution that's driven by device data. Previously updated : 4/15/2020 Last updated : 8/23/2021
In this tutorial, you will...
## Get started with the building scenario
-The sample project used in this tutorial represents a real-world **building scenario**, containing a floor, a room, and a thermostat device. These components will be digitally represented in an Azure Digital Twins instance, which will then be connected to [IoT Hub](../iot-hub/about-iot-hub.md), [Event Grid](../event-grid/overview.md), and two [Azure functions](../azure-functions/functions-overview.md) to facilitate movement of data.
+The sample project used in this tutorial represents a real-world **building scenario**, containing a floor, a room, and a thermostat device. These components will be digitally represented in an Azure Digital Twins instance, which will then be connected to [IoT Hub](../iot-hub/about-iot-hub.md), [Event Grid](../event-grid/overview.md), and two [Azure functions](../azure-functions/functions-overview.md) to enable movement of data.
Below is a diagram representing the full scenario.
-You will first create the Azure Digital Twins instance (**section A** in the diagram), then set up the telemetry data flow into the digital twins (**arrow B**), then set up the data propagation through the twin graph (**arrow C**).
+You'll first create the Azure Digital Twins instance (**section A** in the diagram), then set up the telemetry data flow into the digital twins (**arrow B**), then set up the data propagation through the twin graph (**arrow C**).
:::image type="content" source="media/tutorial-end-to-end/building-scenario.png" alt-text="Diagram of the full building scenario, which shows the data flowing from a device into and out of Azure Digital Twins through various Azure services.":::
-To work through the scenario, you will interact with components of the pre-written sample app you downloaded earlier.
+To work through the scenario, you'll interact with components of the pre-written sample app you downloaded earlier.
Here are the components implemented by the building scenario *AdtSampleApp* sample app:
* Device authentication
SetupBuildingScenario
The output of this command is a series of confirmation messages as three [digital twins](concepts-twins-graph.md) are created and connected in your Azure Digital Twins instance: a floor named floor1, a room named room21, and a temperature sensor named thermostat67. These digital twins represent the entities that would exist in a real-world environment.
-They are connected via relationships into the following [twin graph](concepts-twins-graph.md). The twin graph represents the environment as a whole, including how the entities interact with and relate to each other.
+They're connected via relationships into the following [twin graph](concepts-twins-graph.md). The twin graph represents the environment as a whole, including how the entities interact with and relate to each other.
:::image type="content" source="media/tutorial-end-to-end/building-scenario-graph.png" alt-text="Diagram showing that floor1 contains room21, and room21 contains thermostat67." border="false":::
Query
> > :::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="GetAllTwins":::
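If you'd rather run the same kind of query outside the sample app, a rough equivalent with the Azure CLI (assumes the `azure-iot` extension; the instance name is a placeholder) is:

```bash
# List every digital twin in the instance.
az dt twin query --dt-name <your-instance-name> --query-command "SELECT * FROM DIGITALTWINS"
```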
-After this, you can stop running the project. Keep the solution open in Visual Studio, though, as you'll continue using it throughout the tutorial.
+You can now stop running the project. Keep the solution open in Visual Studio, though, as you'll continue using it throughout the tutorial.
## Set up the sample function app
The next step is setting up an [Azure Functions app](../azure-functions/function
* *ProcessHubToDTEvents*: processes incoming IoT Hub data and updates Azure Digital Twins accordingly * *ProcessDTRoutedData*: processes data from digital twins, and updates the parent twins in Azure Digital Twins accordingly
-In this section, you will publish the pre-written function app, and ensure the function app can access Azure Digital Twins by assigning it an Azure Active Directory (Azure AD) identity. Completing these steps will allow the rest of the tutorial to use the functions inside the function app.
+In this section, you'll publish the pre-written function app, and ensure the function app can access Azure Digital Twins by assigning it an Azure Active Directory (Azure AD) identity. Completing these steps will allow the rest of the tutorial to use the functions inside the function app.
Back in your Visual Studio window where the _**AdtE2ESample**_ project is open, the function app is located in the _**SampleFunctionsApp**_ project file. You can view it in the *Solution Explorer* pane.
In the *Solution Explorer* pane, expand _**SampleFunctionsApp** > Dependencies_.
:::image type="content" source="media/tutorial-end-to-end/update-dependencies-1.png" alt-text="Screenshot of Visual Studio showing the 'Manage NuGet Packages' menu button for the SampleFunctionsApp project." border="false":::
-This will open the NuGet Package Manager. Select the *Updates* tab and if there are any packages to be updated, check the box to *Select all packages*. Then select *Update*.
+Doing so will open the NuGet Package Manager. Select the *Updates* tab and if there are any packages to be updated, check the box to *Select all packages*. Then select *Update*.
:::image type="content" source="media/tutorial-end-to-end/update-dependencies-2.png" alt-text="Screenshot of Visual Studio showing how to selecting to update all packages in the NuGet Package Manager.":::
To publish the function app to Azure, you'll first need to create a storage acco
1. Create a zip of the published files that are located in the *digital-twins-samples-master\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory.
- If you're using PowerShell, you can do this by copying the full path to that *\publish* directory and pasting it into the following command:
+ If you're using PowerShell, you can create the zip by copying the full path to that *\publish* directory and pasting it into the following command:
```powershell
Compress-Archive -Path <full-path-to-publish-directory>\* -DestinationPath .\publish.zip
```
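Once the zip exists, one way to push it to Azure is a zip deploy from the command line; a sketch with placeholder resource names:

```bash
# Deploy the zipped build output to the function app.
az functionapp deployment source config-zip \
  --resource-group <your-resource-group> \
  --name <your-function-app-name> \
  --src ./publish.zip
```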
To publish the function app to Azure, you'll first need to create a storage acco
You've now published the functions to a function app in Azure.
-Next, for your function app to be able to access Azure Digital Twins, it will need to have permission to access your Azure Digital Twins instance. You'll configure this access in the next section.
+Next, your function app will need to have the right permission to access your Azure Digital Twins instance. You'll configure this access in the next section.
### Configure permissions for the function app
-There are two settings that need to be set for the function app to access your Azure Digital Twins instance. These can both be done using the Azure CLI.
+There are two settings that need to be set for the function app to access your Azure Digital Twins instance, both of which can be done using the Azure CLI.
#### Assign access role
The result of this command is outputted information about the role assignment yo
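As an illustrative sketch, a role assignment of this kind can be created with the Azure CLI; the role name shown and the principal ID placeholder are assumptions to adapt to your setup (requires the `azure-iot` extension):

```bash
# Grant the function app's managed identity data-plane access to the instance.
az dt role-assignment create \
  --dt-name <your-instance-name> \
  --assignee <function-app-principal-id> \
  --role "Azure Digital Twins Data Owner"
```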
#### Configure application settings
-The second setting creates an **environment variable** for the function with the URL of your Azure Digital Twins instance. The function code will use this to refer to your instance. For more information about environment variables, see [Manage your function app](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
+The second setting creates an **environment variable** for the function with the URL of your Azure Digital Twins instance. The function code will use the value of this variable to refer to your instance. For more information about environment variables, see [Manage your function app](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal).
Run the command below, filling in the placeholders with the details of your resources.
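A sketch of what that command can look like is below; the setting name `ADT_SERVICE_URL` is an assumption here, so match it to whatever variable name your function code reads:

```bash
# Store the instance URL as an app setting on the function app.
az functionapp config appsettings set \
  --resource-group <your-resource-group> \
  --name <your-function-app-name> \
  --settings "ADT_SERVICE_URL=https://<your-instance-host-name>"
```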
The output is the list of settings for the Azure Function, which should now cont
An Azure Digital Twins graph is meant to be driven by telemetry from real devices.
-In this step, you will connect a simulated thermostat device registered in [IoT Hub](../iot-hub/about-iot-hub.md) to the digital twin that represents it in Azure Digital Twins. As the simulated device emits telemetry, the data will be directed through the *ProcessHubToDTEvents* Azure function that triggers a corresponding update in the digital twin. In this way, the digital twin stays up to date with the real device's data. In Azure Digital Twins, the process of directing events data from one place to another is called [routing events](concepts-route-events.md).
+In this step, you'll connect a simulated thermostat device registered in [IoT Hub](../iot-hub/about-iot-hub.md) to the digital twin that represents it in Azure Digital Twins. As the simulated device emits telemetry, the data will be directed through the *ProcessHubToDTEvents* Azure function that triggers a corresponding update in the digital twin. In this way, the digital twin stays up to date with the real device's data. In Azure Digital Twins, the process of directing events data from one place to another is called [routing events](concepts-route-events.md).
-This happens in this part of the end-to-end scenario (**arrow B**):
+Processing the simulated telemetry happens in this part of the end-to-end scenario (**arrow B**):
:::image type="content" source="media/tutorial-end-to-end/building-scenario-b.png" alt-text="Diagram of an excerpt from the full building scenario diagram highlighting the section that shows elements before Azure Digital Twins.":::
-Here are the actions you will complete to set up this device connection:
+Here are the actions you'll complete to set up this device connection:
1. Create an IoT hub that will manage the simulated device
2. Connect the IoT hub to the appropriate Azure function by setting up an event subscription
3. Register the simulated device in IoT hub
Here are the actions you will complete to set up this device connection:
### Create an IoT Hub instance
-Azure Digital Twins is designed to work alongside [IoT Hub](../iot-hub/about-iot-hub.md), an Azure service for managing devices and their data. In this step, you will set up an IoT hub that will manage the sample device in this tutorial.
+Azure Digital Twins is designed to work alongside [IoT Hub](../iot-hub/about-iot-hub.md), an Azure service for managing devices and their data. In this step, you'll set up an IoT hub that will manage the sample device in this tutorial.
In Azure Cloud Shell, use this command to create a new IoT hub:
az iot hub create --name <name-for-your-IoT-hub> --resource-group <your-resource
The output of this command is information about the IoT hub that was created.
-Save the **name** that you gave to your IoT hub. You will use it later.
+Save the **name** that you gave to your IoT hub. You'll use it later.
### Connect the IoT hub to the Azure function Next, connect your IoT hub to the *ProcessHubToDTEvents* Azure function in the function app you published earlier, so that data can flow from the device in IoT Hub through the function, which updates Azure Digital Twins.
-To do this, you'll create an **Event Subscription** on your IoT Hub, with the Azure function as an endpoint. This "subscribes" the function to events happening in IoT Hub.
+To do so, you'll create an **Event Subscription** on your IoT Hub, with the Azure function as an endpoint. This "subscribes" the function to events happening in IoT Hub.
-In the [Azure portal](https://portal.azure.com/), navigate to your newly-created IoT hub by searching for its name in the top search bar. Select *Events* from the hub menu, and select *+ Event Subscription*.
+In the [Azure portal](https://portal.azure.com/), navigate to your newly created IoT hub by searching for its name in the top search bar. Select *Events* from the hub menu, and select *+ Event Subscription*.
:::image type="content" source="media/tutorial-end-to-end/event-subscription-1.png" alt-text="Screenshot of the Azure portal showing the IoT Hub event subscription.":::
-This will bring up the *Create Event Subscription* page.
+Selecting this option will bring up the *Create Event Subscription* page.
:::image type="content" source="media/tutorial-end-to-end/event-subscription-2.png" alt-text="Screenshot of the Azure portal showing how to create an event subscription.":::
-Fill in the fields as follows (fields filled by default are not mentioned):
+Fill in the fields as follows (fields filled by default aren't mentioned):
* *EVENT SUBSCRIPTION DETAILS* > **Name**: Give a name to your event subscription.
* *TOPIC DETAILS* > **System Topic Name**: Give a name to use for the system topic.
* *EVENT TYPES* > **Filter to Event Types**: Select *Device Telemetry* from the menu options.
* *ENDPOINT DETAILS* > **Endpoint Type**: Select *Azure Function* from the menu options.
-* *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link. This will open a *Select Azure Function* window:
+* *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link, which will open a *Select Azure Function* window:
:::image type="content" source="media/tutorial-end-to-end/event-subscription-3.png" alt-text="Screenshot of the Azure portal event subscription showing the window to select an Azure function." border="false":::
- - Fill in your **Subscription**, **Resource group**, **Function app** and **Function** (*ProcessHubToDTEvents*). Some of these may auto-populate after selecting the subscription.
+ - Fill in your **Subscription**, **Resource group**, **Function app**, and **Function** (*ProcessHubToDTEvents*). Some of these values may auto-populate after selecting the subscription.
 - Select **Confirm Selection**. Back on the *Create Event Subscription* page, select **Create**.

### Register the simulated device with IoT Hub
-This section creates a device representation in IoT Hub with the ID thermostat67. The simulated device will connect into this, and this is how telemetry events will go from the device into IoT Hub, where the subscribed Azure function from the previous step is listening, ready to pick up the events and continue processing.
+This section creates a device representation in IoT Hub with the ID thermostat67. The simulated device will connect into this representation, which is how telemetry events will go from the device into IoT Hub. The IoT hub is where the subscribed Azure function from the previous step is listening, ready to pick up the events and continue processing.
In Azure Cloud Shell, create a device in IoT Hub with the following command:
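A sketch of that command, assuming the `azure-iot` CLI extension (the hub name placeholder matches the one you saved earlier):

```bash
# Register a device identity with the ID thermostat67 in the IoT hub.
az iot hub device-identity create \
  --device-id thermostat67 \
  --hub-name <name-for-your-IoT-hub> \
  --resource-group <your-resource-group>
```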
Now, to see the results of the data simulation that you've set up, run the **Dev
:::image type="content" source="media/tutorial-end-to-end/start-button-simulator.png" alt-text="Screenshot of the Visual Studio start button with the DeviceSimulator project open.":::
-A console window will open and display simulated temperature telemetry messages. These are being sent to IoT Hub, where they are then picked up and processed by the Azure function.
+A console window will open and display simulated temperature telemetry messages. These messages are being sent to IoT Hub, where they're then picked up and processed by the Azure function.
:::image type="content" source="media/tutorial-end-to-end/console-simulator-telemetry.png" alt-text="Screenshot of the console output of the device simulator showing temperature telemetry being sent.":::
You should see the live updated temperatures *from your Azure Digital Twins inst
:::image type="content" source="media/tutorial-end-to-end/console-digital-twins-telemetry.png" alt-text="Screenshot of the console output showing log of temperature messages from digital twin thermostat67.":::
-Once you've verified this is working successfully, you can stop running both projects. Keep the Visual Studio windows open, as you'll continue using them in the rest of the tutorial.
+Once you've verified the live temperatures logging is working successfully, you can stop running both projects. Keep the Visual Studio windows open, as you'll continue using them in the rest of the tutorial.
## Propagate Azure Digital Twins events through the graph

So far in this tutorial, you've seen how Azure Digital Twins can be updated from external device data. Next, you'll see how changes to one digital twin can propagate through the Azure Digital Twins graph; in other words, how to update twins from service-internal data.
-To do this, you'll use the *ProcessDTRoutedData* Azure function to update a Room twin when the connected Thermostat twin is updated. This happens in this part of the end-to-end scenario (**arrow C**):
+To do so, you'll use the *ProcessDTRoutedData* Azure function to update a Room twin when the connected Thermostat twin is updated. The update functionality happens in this part of the end-to-end scenario (**arrow C**):
:::image type="content" source="media/tutorial-end-to-end/building-scenario-c.png" alt-text="Diagram of an excerpt from the full building scenario diagram highlighting the section that shows the elements after Azure Digital Twins.":::
-Here are the actions you will complete to set up this data flow:
-1. [Create an event grid topic](#create-the-event-grid-topic) to facilitate movement of data between Azure services
+Here are the actions you'll complete to set up this data flow:
+1. [Create an event grid topic](#create-the-event-grid-topic) to enable movement of data between Azure services
1. [Create an endpoint](#create-the-endpoint) in Azure Digital Twins that connects the instance to the event grid topic
1. [Set up a route](#create-the-route) within Azure Digital Twins that sends twin property change events to the endpoint
1. [Set up an Azure function](#connect-the-azure-function) that listens on the event grid topic at the endpoint, receives the twin property change events that are sent there, and updates other twins in the graph accordingly
Here are the actions you will complete to set up this data flow:
Next, subscribe the *ProcessDTRoutedData* Azure function to the event grid topic you created earlier, so that telemetry data can flow from the thermostat67 twin through the event grid topic to the function, which goes back into Azure Digital Twins and updates the room21 twin accordingly.
-To do this, you'll create an **Event Grid subscription** that sends data from the **event grid topic** that you created earlier to your *ProcessDTRoutedData* Azure function.
+To do so, you'll create an **Event Grid subscription** that sends data from the **event grid topic** that you created earlier to your *ProcessDTRoutedData* Azure function.
In the [Azure portal](https://portal.azure.com/), navigate to your event grid topic by searching for its name in the top search bar. Select *+ Event Subscription*.
In the [Azure portal](https://portal.azure.com/), navigate to your event grid to
The steps to create this event subscription are similar to when you subscribed the first Azure function to IoT Hub earlier in this tutorial. This time, you don't need to specify *Device Telemetry* as the event type to listen for, and you'll connect to a different Azure function.
-On the *Create Event Subscription* page, fill in the fields as follows (fields filled by default are not mentioned):
+On the *Create Event Subscription* page, fill in the fields as follows (fields filled by default aren't mentioned):
* *EVENT SUBSCRIPTION DETAILS* > **Name**: Give a name to your event subscription.
* *ENDPOINT DETAILS* > **Endpoint Type**: Select *Azure Function* from the menu options.
-* *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link. This will open a *Select Azure Function* window:
- - Fill in your **Subscription**, **Resource group**, **Function app** and **Function** (*ProcessDTRoutedData*). Some of these may auto-populate after selecting the subscription.
+* *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link, which will open a *Select Azure Function* window:
+ - Fill in your **Subscription**, **Resource group**, **Function app**, and **Function** (*ProcessDTRoutedData*). Some of these values may auto-populate after selecting the subscription.
- Select **Confirm Selection**. Back on the *Create Event Subscription* page, select **Create**.
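If you script your environments, the portal steps above correspond roughly to a single CLI call; the resource IDs below are placeholders:

```bash
# Subscribe the ProcessDTRoutedData function to the event grid topic.
az eventgrid event-subscription create \
  --name <your-subscription-name> \
  --source-resource-id <event-grid-topic-resource-id> \
  --endpoint-type azurefunction \
  --endpoint <function-app-resource-id>/functions/ProcessDTRoutedData
```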
You should see the live updated temperatures *from your Azure Digital Twins inst
:::image type="content" source="media/tutorial-end-to-end/console-digital-twins-telemetry-b.png" alt-text="Screenshot of the console output showing a log of temperature messages, from a thermostat and a room.":::
-Once you've verified this is working successfully, you can stop running both projects. You can also close the Visual Studio windows, as the tutorial is now complete.
+Once you've verified the live temperatures logging from your instance is working successfully, you can stop running both projects. You can also close the Visual Studio windows, as the tutorial is now complete.
## Review
-Here is a review of the scenario that you built out in this tutorial.
+Here's a review of the scenario that you built out in this tutorial.
1. An Azure Digital Twins instance digitally represents a floor, a room, and a thermostat (represented by **section A** in the diagram below)
2. Simulated device telemetry is sent to IoT Hub, where the *ProcessHubToDTEvents* Azure function is listening for telemetry events. The *ProcessHubToDTEvents* Azure function uses the information in these events to set the *Temperature* property on thermostat67 (**arrow B** in the diagram).
After completing this tutorial, you can choose which resources you want to remov
* **If you want to continue using the Azure Digital Twins instance you set up in this article, but clear out some or all of its models, twins, and relationships**, you can use the [az dt](/cli/azure/dt?view=azure-cli-latest&preserve-view=true) CLI commands in an [Azure Cloud Shell](https://shell.azure.com) window to delete the elements you want to remove.
- This option will not remove any of the other Azure resources created in this tutorial (IoT Hub, Azure Functions app, etc.). You can delete these individually using the [dt commands](/cli/azure/reference-index?view=azure-cli-latest&preserve-view=true) appropriate for each resource type.
+ This option won't remove any of the other Azure resources created in this tutorial (IoT Hub, Azure Functions app, and so on). You can delete these individually using the [dt commands](/cli/azure/reference-index?view=azure-cli-latest&preserve-view=true) appropriate for each resource type.
You may also want to delete the project folder from your local machine.
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-azure-sql.md
Last updated 01/03/2021
# Tutorial: Migrate SQL Server to Azure SQL Database using DMS
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the [Adventureworks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.
+You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.
You will learn how to:

> [!div class="checklist"]
To complete this tutorial, you need to:
- Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).
- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
+- [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)
- Create a database in Azure SQL Database, which you do by following the details in the article [Create a database in Azure SQL Database using the Azure portal](../azure-sql/database/single-database-create-quickstart.md). For purposes of this tutorial, the name of the Azure SQL Database is assumed to be **AdventureWorksAzure**, but you can provide whatever name you wish.

  > [!NOTE]
To complete this tutorial, you need to:
> >If you don't have site-to-site connectivity between the on-premises network and Azure or if there is limited site-to-site connectivity bandwidth, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode leverages an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-hybrid-portal.md).
-- Ensure that your virtual network Network Security Group outbound security rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+- Ensure that your virtual network Network Security Group outbound security rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
- Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
- Open your Windows firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall.
- If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.
To complete this tutorial, you need to:
Before you can migrate data from a SQL Server instance to a single database or pooled database in Azure SQL Database, you need to assess the SQL Server database for any blocking issues that might prevent migration. Using the Data Migration Assistant, follow the steps described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem) to complete the on-premises database assessment. A summary of the required steps follows:

1. In the Data Migration Assistant, select the New (+) icon, and then select the **Assessment** project type.
-2. Specify a project name. From the **Assessment type** drop down list, select **Database Engine**, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then select **Create** to create the project.
+2. Specify a project name. From the **Assessment type** drop-down list, select **Database Engine**, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then select **Create** to create the project.
When you're assessing the source SQL Server database migrating to a single database or pooled database in Azure SQL Database, you can choose one or both of the following assessment report types:
Before you can migrate data from a SQL Server instance to a single database or p
3. In the Data Migration Assistant, on the **Options** screen, select **Next**.
4. On the **Select sources** screen, in the **Connect to a server** dialog box, provide the connection details to your SQL Server, and then select **Connect**.
-5. In the **Add sources** dialog box, select **Adventureworks2016**, select **Add**, and then select **Start Assessment**.
+5. In the **Add sources** dialog box, select **AdventureWorks2016**, select **Add**, and then select **Start Assessment**.
   > [!NOTE]
   > If you use SSIS, DMA does not currently support the assessment of the source SSISDB. However, SSIS projects/packages will be assessed/validated as they are redeployed to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
After you're comfortable with the assessment and satisfied that the selected dat
> [!IMPORTANT]
> If you use SSIS, DMA does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-To migrate the **Adventureworks2016** schema to a single database or pooled database Azure SQL Database, perform the following steps:
+To migrate the **AdventureWorks2016** schema to a single database or pooled database Azure SQL Database, perform the following steps:
1. In the Data Migration Assistant, select the New (+) icon, and then under **Project type**, select **Migration**.
2. Specify a project name, in the **Source server type** text box, select **SQL Server**, and then in the **Target server type** text box, select **Azure SQL Database**.
To migrate the **Adventureworks2016** schema to a single database or pooled data
   ![Create Data Migration Assistant Project](media/tutorial-sql-server-to-azure-sql/dma-create-project.png)

4. Select **Create** to create the project.
-5. In the Data Migration Assistant, specify the source connection details for your SQL Server, select **Connect**, and then select the **Adventureworks2016** database.
+5. In the Data Migration Assistant, specify the source connection details for your SQL Server, select **Connect**, and then select the **AdventureWorks2016** database.
![Data Migration Assistant Source Connection Details](media/tutorial-sql-server-to-azure-sql/dma-source-connect.png)
To migrate the **Adventureworks2016** schema to a single database or pooled data
![Data Migration Assistant Target Connection Details](media/tutorial-sql-server-to-azure-sql/dma-target-connect.png)
-7. Select **Next** to advance to the **Select objects** screen, on which you can specify the schema objects in the **Adventureworks2016** database that need to be deployed to Azure SQL Database.
+7. Select **Next** to advance to the **Select objects** screen, on which you can specify the schema objects in the **AdventureWorks2016** database that need to be deployed to Azure SQL Database.
By default, all objects are selected.
To migrate the **Adventureworks2016** schema to a single database or pooled data
[!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)]
-## Create an instance
+## Create an Azure Database Migration Service instance
1. In the Azure portal menu or on the **Home** page, select **Create a resource**. Search for and select **Azure Database Migration Service**.
To migrate the **Adventureworks2016** schema to a single database or pooled data
![Configure Azure Database Migration Service instance networking settings](media/tutorial-sql-server-to-azure-sql/dms-settings-3.png)
- - Select **Review + Create** to create the service.
+ - Select **Review + Create** to review the details and then select **Create** to create the service.
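The same instance can also be created from the command line; here's a sketch with placeholder names (the SKU shown is one example tier):

```bash
# Create a Database Migration Service instance in an existing virtual network subnet.
az dms create \
  --name <your-dms-name> \
  --resource-group <your-resource-group> \
  --location <region> \
  --sku-name Premium_4vCores \
  --subnet <vnet-subnet-resource-id>
```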
## Create a migration project
After the service is created, locate it within the Azure portal, open it, and th
## Select databases for migration
-Select either all databases or specific databases that you want to migrate to Azure SQL Database. DMS provides you with the expected migration time for selected databases. If the migration downtimes are acceptable continue with migration. If migration downtime not acceptable, consider migrating to [SQL Managed Instance with near-zero downtime](tutorial-sql-server-managed-instance-online.md) or contacting the [DMS team](mailto:DMSFeedback@microsoft.com) for other options.
+Select either all databases or specific databases that you want to migrate to Azure SQL Database. DMS provides you with the expected migration time for selected databases. If the migration downtimes are acceptable, continue with the migration. If they aren't acceptable, consider migrating to [SQL Managed Instance with near-zero downtime](tutorial-sql-server-managed-instance-online.md) or contacting the [DMS team](mailto:DMSFeedback@microsoft.com) for other options.
1. Choose the database(s) you want to migrate from the list of available databases.
1. Review the expected downtime. If it's acceptable, select **Next: Select target >>**
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-managed-instance.md
Previously updated : 01/08/2020 Last updated : 08/16/2021

# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS

You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md). For additional methods that may require some manual effort, see the article [SQL Server to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
-In this tutorial, you migrate the **Adventureworks2012** database from an on-premises instance of SQL Server to a SQL Managed Instance by using Azure Database Migration Service.
+In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database from an on-premises instance of SQL Server to a SQL Managed Instance by using Azure Database Migration Service.
-In this tutorial, you learn how to:
+You will learn how to:
> [!div class="checklist"]
>
+> - Register the Azure DataMigration resource provider.
> - Create an instance of Azure Database Migration Service.
> - Create a migration project by using Azure Database Migration Service.
> - Run the migration.
> - Monitor the migration.
-> - Download a migration report.
> [!IMPORTANT]
> For offline migrations from SQL Server to SQL Managed Instance, Azure Database Migration Service can create the backup files for you. Alternately, you can provide the latest full database backup in the SMB network share that the service will use to migrate your databases. Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups into a single backup media is not supported. Note that you can use compressed backups as well, to reduce the likelihood of experiencing potential issues with migrating large backups.
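If you take the backup yourself, a compressed full backup to the share might look like the following sketch; the server name and SMB path are placeholders, and `-E` assumes Windows authentication:

```bash
# Take a compressed full backup of AdventureWorks2016 to the SMB network share.
# (bash doubles the backslashes; in PowerShell or cmd, write \\<file-server>\backups\... directly)
sqlcmd -S <your-sql-server> -E -Q "BACKUP DATABASE [AdventureWorks2016] TO DISK = N'\\\\<file-server>\\backups\\AdventureWorks2016.bak' WITH COMPRESSION, INIT"
```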
This article describes an offline migration from SQL Server to a SQL Managed Ins
To complete this tutorial, you need to:
+- Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).
+- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
+- [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)
- Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). [Learn network topologies for SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.

  > [!NOTE]
To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-- Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+- Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
- Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access). - Open your Windows Firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall. - If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.
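As a sketch of the firewall prerequisites above, the rules can also be created on the source server with the built-in NetSecurity cmdlets. The port numbers assume a default instance; adjust them if your instance listens elsewhere.

```powershell
# Allow Azure Database Migration Service to reach the default SQL Server instance.
New-NetFirewallRule -DisplayName "SQL Server (TCP 1433)" -Direction Inbound `
    -Protocol TCP -LocalPort 1433 -Action Allow

# If named instances use dynamic ports, also allow the SQL Browser service.
New-NetFirewallRule -DisplayName "SQL Browser (UDP 1434)" -Direction Inbound `
    -Protocol UDP -LocalPort 1434 -Action Allow
```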
To complete this tutorial, you need to:
## Create an Azure Database Migration Service instance
-1. In the Azure portal, select + **Create a resource**, search for **Azure Database Migration Service**, and then select **Azure Database Migration Service** from the drop-down list.
+1. In the Azure portal menu or on the **Home** page, select **Create a resource**. Search for and select **Azure Database Migration Service**.
![Azure Marketplace](media/tutorial-sql-server-to-managed-instance/portal-marketplace.png) 2. On the **Azure Database Migration Service** screen, select **Create**.
- ![Create Azure Database Migration Service instance](media/tutorial-sql-server-to-managed-instance/dms-create1.png)
+ ![Create Azure Database Migration Service instance](media/tutorial-sql-server-to-managed-instance/dms-create-service-1.png)
-3. On the **Create Migration Service** screen, specify a name for the service, the subscription, and a new or existing resource group.
+3. On the **Create Migration Service** basics screen:
-4. Select the location in which you want to create the instance of DMS.
+ - Select the subscription.
+ - Create a new resource group or choose an existing one.
+ - Specify a name for the instance of the Azure Database Migration Service.
+ - Select the location in which you want to create the instance of Azure Database Migration Service.
+ - Choose **Azure** as the service mode.
+ - Select a pricing tier. For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
-5. Select an existing virtual network or create one.
+ ![Configure Azure Database Migration Service instance basics settings](media/tutorial-sql-server-to-managed-instance/dms-create-service-2.png)
- The virtual network provides Azure Database Migration Service with access to the source SQL Server and target SQL Managed Instance.
+ - Select **Next: Networking**.
- For more information on how to create a virtual network in Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+4. On the **Create Migration Service** networking screen:
- For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
-
-6. Select a pricing tier.
-
- For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
+ - Select an existing virtual network or create a new one. The virtual network provides Azure Database Migration Service with access to the source SQL Server and the target Azure SQL Managed Instance.
+
+ - For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+
+ - For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
- ![Create DMS Service](media/tutorial-sql-server-to-managed-instance/dms-create-service2.png)
+ ![Configure Azure Database Migration Service instance networking settings](media/tutorial-sql-server-to-managed-instance/dms-create-service-3.png)
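If the virtual network doesn't exist yet, a minimal Azure PowerShell sketch follows; the resource names, location, and address ranges are placeholders.

```powershell
# Define a subnet and a virtual network for the migration service.
$subnet = New-AzVirtualNetworkSubnetConfig -Name "dms-subnet" -AddressPrefix "10.0.0.0/24"

New-AzVirtualNetwork -Name "dms-vnet" -ResourceGroupName "dms-rg" `
    -Location "eastus" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
```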
-7. Select **Create** to create the service.
+ - Select **Review + Create** to review the details and then select **Create** to create the service.
## Create a migration project After an instance of the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+1. In the Azure portal menu, select **All services**. Search for and select **Azure Database Migration Services**.
![Locate all instances of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance/dms-search.png)
-2. On the **Azure Database Migration Service** screen, search for the name of the instance that you created, and then select the instance.
+2. On the **Azure Database Migration Services** screen, select the Azure Database Migration Service instance that you created.
-3. Select + **New Migration Project**.
+3. Select **New Migration Project**.
-4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Managed Instance**, and then for **Choose type of activity**, select **Offline data migration**.
+ ![Locate your instance of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance/dms-create-project-1.png)
- ![Create DMS Project](media/tutorial-sql-server-to-managed-instance/dms-create-project2.png)
+4. On the **New migration project** screen, specify a name for the project. In the **Source server type** text box, select **SQL Server**; in the **Target server type** text box, select **Azure SQL Database Managed Instance**; and then for **Choose type of activity**, select **Offline data migration**.
-5. Select **Create** to create the project.
+ ![Create Database Migration Service Project](media/tutorial-sql-server-to-managed-instance/dms-create-project-2.png)
+
+5. Select **Create and run activity** to create the project and run the migration activity.
## Specify source details
-1. On the **Migration source detail** screen, specify the connection details for the source SQL Server.
+1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
+
+ Make sure to use a fully qualified domain name (FQDN) for the source SQL Server instance name. You can also use the IP address for situations in which DNS name resolution isn't possible.
2. If you haven't installed a trusted certificate on your server, select the **Trust server certificate** check box.
After an instance of the service is created, locate it within the Azure portal,
> [!CAUTION] > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
- ![Source Details](media/tutorial-sql-server-to-managed-instance/dms-source-details1.png)
+ ![Source Details](media/tutorial-sql-server-to-managed-instance/dms-source-details.png)
-3. Select **Save**.
-
-4. On the **Select source databases** screen, select the **Adventureworks2012** database for migration.
-
- ![Select Source Databases](media/tutorial-sql-server-to-managed-instance/dms-source-database1.png)
-
- > [!IMPORTANT]
- > If you use SQL Server Integration Services (SSIS), DMS does not currently support migrating the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to SQL Managed Instance. However, you can provision SSIS in Azure Data Factory (ADF) and redeploy your SSIS projects/packages to the destination SSISDB hosted by SQL Managed Instance. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-5. Select **Save**.
+3. Select **Next: Select target**.
## Specify target details
-1. On the **Migration target details** screen, specify the connection details for the target, which is the pre-provisioned SQL Managed Instance to which you're migrating the **AdventureWorks2012** database.
+1. On the **Select target** screen, specify the connection details for the target, which is the pre-provisioned SQL Managed Instance to which you're migrating the **AdventureWorks2016** database.
If you haven't already provisioned the SQL Managed Instance, select the [link](../azure-sql/managed-instance/instance-create-quickstart.md) to help you provision the instance. You can still continue with project creation and then, when the SQL Managed Instance is ready, return to this specific project to execute the migration.
- ![Select Target](media/tutorial-sql-server-to-managed-instance/dms-target-details2.png)
+ ![Select Target](media/tutorial-sql-server-to-managed-instance/dms-target-details.png)
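If you still need to provision the target instance, a hedged Azure PowerShell sketch follows; every value below is a placeholder, and provisioning a managed instance can take several hours.

```powershell
# Create the target managed instance in an existing virtual network subnet.
New-AzSqlInstance -Name "target-managed-instance" -ResourceGroupName "dms-rg" `
    -Location "eastus" `
    -SubnetId "/subscriptions/<sub-id>/resourceGroups/dms-rg/providers/Microsoft.Network/virtualNetworks/mi-vnet/subnets/mi-subnet" `
    -AdministratorCredential (Get-Credential) `
    -LicenseType LicenseIncluded -StorageSizeInGB 256 -VCore 8 `
    -Edition "GeneralPurpose" -ComputeGeneration Gen5
```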
-2. Select **Save**.
+2. Select **Next: Select databases**. On the **Select databases** screen, select the **AdventureWorks2016** database for migration.
-## Select source databases
+ ![Select Source Databases](media/tutorial-sql-server-to-managed-instance/dms-source-database.png)
-1. On the **Select source databases** screen, select the source database that you want to migrate.
-
- ![Select source databases](media/tutorial-sql-server-to-managed-instance/select-source-databases.png)
+ > [!IMPORTANT]
+ > If you use SQL Server Integration Services (SSIS), DMS does not currently support migrating the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to SQL Managed Instance. However, you can provision SSIS in Azure Data Factory (ADF) and redeploy your SSIS projects/packages to the destination SSISDB hosted by SQL Managed Instance. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-2. Select **Save**.
+3. Select **Next: Select logins**.
## Select logins
After an instance of the service is created, locate it within the Azure portal,
>[!NOTE] >By default, Azure Database Migration Service only supports migrating SQL logins. To enable support for migrating Windows logins, see the **Prerequisites** section of this tutorial.
- ![Select logins](media/tutorial-sql-server-to-managed-instance/select-logins.png)
+ ![Select logins](media/tutorial-sql-server-to-managed-instance/dms-select-logins.png)
-2. Select **Save**.
+2. Select **Next: Configure migration settings**.
## Configure migration settings
-1. On the **Configure migration settings** screen, provide the following detail:
+1. On the **Configure migration settings** screen, provide the following details:
| Parameter | Description | |--||
After an instance of the service is created, locate it within the Azure portal,
|**Storage account settings** | The SAS URI that provides Azure Database Migration Service with access to your storage account container to which the service uploads the backup files and that is used for migrating databases to SQL Managed Instance. [Learn how to get the SAS URI for blob container](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container). This SAS URI must be for the blob container, not for the storage account.| |**TDE Settings** | If you're migrating the source databases with Transparent Data Encryption (TDE) enabled, you need to have write privileges on the target SQL Managed Instance. Select the subscription in which the SQL Managed Instance is provisioned from the drop-down menu. Select the target **Azure SQL Database Managed Instance** in the drop-down menu. |
- ![Configure Migration Settings](media/tutorial-sql-server-to-managed-instance/dms-configure-migration-settings3.png)
+ ![Configure Migration Settings](media/tutorial-sql-server-to-managed-instance/dms-configure-migration-settings.png)
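As a sketch of getting the SAS URI for the blob container mentioned in the settings above, the following uses Azure PowerShell; the account name, key, container name, permissions, and expiry are assumptions. Scope the token to the container, not the storage account.

```powershell
# Build a storage context and create a container-scoped SAS URI for the service.
$ctx = New-AzStorageContext -StorageAccountName "migrationstorage" `
    -StorageAccountKey "<storage-account-key>"

New-AzStorageContainerSASToken -Name "migration-backups" -Context $ctx `
    -Permission rwdl -ExpiryTime (Get-Date).AddDays(7) -FullUri
```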
-2. Select **Save**.
+2. Select **Next: Summary**.
## Review the migration summary
-1. On the **Migration summary** screen, in the **Activity name** text box, specify a name for the migration activity.
-
-2. Expand the **Validation option** section to display the **Choose validation option** screen, specify whether to validate the migrated database for query correctness, and then select **Save**.
+1. On the **Summary** screen, in the **Activity name** text box, specify a name for the migration activity.
-3. Review and verify the details associated with the migration project.
+2. Review and verify the details associated with the migration project.
- ![Migration project summary](media/tutorial-sql-server-to-managed-instance/dms-project-summary2.png)
-
-4. Select **Save**.
+ ![Migration project summary](media/tutorial-sql-server-to-managed-instance/dms-project-summary.png)
## Run the migration -- Select **Run migration**.
+- Select **Start migration**.
- The migration activity window appears, and the status of the activity is **Pending**.
+ The migration activity window appears and displays the current migration status of the databases and logins.
## Monitor the migration 1. In the migration activity screen, select **Refresh** to update the display.
- ![Screenshot that shows the migration activity screen and the Refresh button.](media/tutorial-sql-server-to-managed-instance/dms-monitor-migration1.png)
+ ![Screenshot that shows the migration activity screen and the Refresh button.](media/tutorial-sql-server-to-managed-instance/dms-monitor-migration.png)
- You can further expand the databases and logins categories to monitor the migration status of the respective server objects.
+2. You can further expand the databases and logins categories to monitor the migration status of the respective server objects.
![Migration activity in progress](media/tutorial-sql-server-to-managed-instance/dms-monitor-migration-extend.png)
-2. After the migration completes, select **Download report** to get a report listing the details associated with the migration process.
-
-3. Verify that the target database on the target SQL Managed Instance environment.
+3. After the migration completes, verify the target database on the SQL Managed Instance environment.
-## Next steps
+## Additional resources
- For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](../azure-sql/managed-instance/restore-sample-database-quickstart.md). - For information about SQL Managed Instance, see [What is SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md).
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
This article builds on the information in the [overview article](./event-hubs-ab
## Namespace
-An Event Hubs namespace is a management container for event hubs (or topics, in Kafka parlance). It provides DNS integrated network endpoints and a range of access control and network integration management features such as [IP filtering](event-hubs-ip-filtering.md), [virtual network service endpoint](event-hubs-service-endpoints.md), and [Private Link](private-link-service.md).
+An Event Hubs namespace is a management container for event hubs (or topics, in Kafka parlance). It provides DNS-integrated network endpoints and a range of access control and network integration management features such as [IP filtering](event-hubs-ip-filtering.md), [virtual network service endpoint](event-hubs-service-endpoints.md), and [Private Link](private-link-service.md).
:::image type="content" source="./media/event-hubs-features/namespace.png" alt-text="Image showing an Event Hubs namespace":::
You can publish an event via AMQP 1.0, the Kafka protocol, or HTTPS. The Event H
The choice to use AMQP or HTTPS is specific to the usage scenario. AMQP requires the establishment of a persistent bidirectional socket in addition to transport level security (TLS) or SSL/TLS. AMQP has higher network costs when initializing the session, however HTTPS requires additional TLS overhead for every request. AMQP has significantly higher performance for frequent publishers and can achieve much lower latencies when used with asynchronous publishing code.
-You can publish events individually or batched. A single publication has a limit of 1 MB, regardless of whether it is a single event or a batch. Publishing events larger than this threshold will be rejected.
+You can publish events individually or batched. A single publication has a limit of 1 MB, regardless of whether it's a single event or a batch. Publishing events larger than this threshold will be rejected.
-Event Hubs throughput is scaled by using partitions and throughput-unit allocations (see below). It is a best practice for publishers to remain unaware of the specific partitioning model chosen for an event hub and to only specify a *partition key* that is used to consistently assign related events to the same partition.
+Event Hubs throughput is scaled by using partitions and throughput-unit allocations (see below). It's a best practice for publishers to remain unaware of the specific partitioning model chosen for an event hub and to only specify a *partition key* that is used to consistently assign related events to the same partition.
![Partition keys](./media/event-hubs-features/partition_keys.png)
similar stores and analytics platforms.
The reason for Event Hubs' limit on data retention based on time is to prevent large volumes of historic customer data getting trapped in a deep store that is only indexed by a timestamp and only allows for sequential access. The
-architectural philosophy here is that historic data needs richer indexing and
+architectural philosophy here is that historic data needs richer indexing and
more direct access than the real-time eventing interface that Event Hubs or
-Kafka provide. Event stream engines are not well suited to play the role of data
+Kafka provide. Event stream engines aren't well suited to play the role of data
lakes or long-term archives for event sourcing.
lakes or long-term archives for event sourcing.
> Event Hubs is a real-time event stream engine and is not designed to be used instead of a database and/or as a > permanent store for infinitely held event streams. >
-> The deeper the history of an event stream gets, the more you will need auxiliary indexes to find a particular historical slice of a given stream. Inspection of event payloads and indexing are not within the feature scope of Event Hubs (or Apache Kafka). Databases and specialized analytics stores and engines such as [Azure Data Lake Store](../data-lake-store/data-lake-store-overview.md), [Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-overview.md) and [Azure Synapse](../synapse-analytics/overview-what-is.md) are therefore far better suited for storing historic events.
+> The deeper the history of an event stream gets, the more you will need auxiliary indexes to find a particular historical slice of a given stream. Inspection of event payloads and indexing aren't within the feature scope of Event Hubs (or Apache Kafka). Databases and specialized analytics stores and engines such as [Azure Data Lake Store](../data-lake-store/data-lake-store-overview.md), [Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-overview.md) and [Azure Synapse](../synapse-analytics/overview-what-is.md) are therefore far better suited for storing historic events.
> > [Event Hubs Capture](event-hubs-capture-overview.md) integrates directly with Azure Blob Storage and Azure Data Lake Storage and, through that integration, also enables [flowing events directly into Azure Synapse](store-captured-data-data-warehouse.md). >
You don't have to create publisher names ahead of time, but they must match the
## Capture
-[Event Hubs Capture](event-hubs-capture-overview.md) enables you to automatically capture the streaming data in Event Hubs and save it to your choice of either a Blob storage account, or an Azure Data Lake Service account. You can enable capture from the Azure portal, and specify a minimum size and time window to perform the capture. Using Event Hubs Capture, you specify your own Azure Blob Storage account and container, or Azure Data Lake Storage account, one of which is used to store the captured data. Captured data is written in the Apache Avro format.
+[Event Hubs Capture](event-hubs-capture-overview.md) enables you to automatically capture the streaming data in Event Hubs and save it to your choice of either a Blob storage account, or an Azure Data Lake Storage account. You can enable capture from the Azure portal, and specify a minimum size and time window to perform the capture. Using Event Hubs Capture, you specify your own Azure Blob Storage account and container, or Azure Data Lake Storage account, one of which is used to store the captured data. Captured data is written in the Apache Avro format.
:::image type="content" source="./media/event-hubs-features/capture.png" alt-text="Image showing capturing of Event Hubs data into Azure Storage or Azure Data Lake Storage":::
Any entity that reads event data from an event hub is an *event consumer*. All E
The publish/subscribe mechanism of Event Hubs is enabled through *consumer groups*. A consumer group is a view (state, position, or offset) of an entire event hub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets.
-In a stream processing architecture, each downstream application equates to a consumer group. If you want to write event data to long-term storage, then that storage writer application is a consumer group. Complex event processing can then be performed by another, separate consumer group. You can only access partitions through a consumer group. There is always a default consumer group in an event hub, and you can create up to the [maximum number of consumer groups](event-hubs-quotas.md) for the corresponding pricing tier.
+In a stream processing architecture, each downstream application equates to a consumer group. If you want to write event data to long-term storage, then that storage writer application is a consumer group. Complex event processing can then be performed by another, separate consumer group. You can only access partitions through a consumer group. There's always a default consumer group in an event hub, and you can create up to the [maximum number of consumer groups](event-hubs-quotas.md) for the corresponding pricing tier.
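For example, a dedicated consumer group for a downstream application can be created with Azure PowerShell. This is a sketch: the resource names are placeholders, and the parameter names can vary across Az.EventHub module versions.

```powershell
# Create a dedicated consumer group for a long-term storage writer application.
New-AzEventHubConsumerGroup -ResourceGroupName "eh-rg" -Namespace "my-eh-namespace" `
    -EventHub "telemetry" -Name "storage-writer"
```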
-There can be at most 5 concurrent readers on a partition per consumer group; however **it is recommended that there is only one active receiver on a partition per consumer group**. Within a single partition, each reader receives all of the messages. If you have multiple readers on the same partition, then you process duplicate messages. You need to handle this in your code, which may not be trivial. However, it's a valid approach in some scenarios.
+There can be at most 5 concurrent readers on a partition per consumer group; however **it's recommended that there's only one active receiver on a partition per consumer group**. Within a single partition, each reader receives all of the messages. If you have multiple readers on the same partition, then you process duplicate messages. You need to handle this in your code, which may not be trivial. However, it's a valid approach in some scenarios.
Some clients offered by the Azure SDKs are intelligent consumer agents that automatically manage the details of ensuring that each partition has a single reader and that all partitions for an event hub are being read from. This allows your code to focus on processing the events being read from the event hub so it can ignore many of the details of the partitions. For more information, see [Connect to a partition](#connect-to-a-partition).
An *offset* is the position of an event within a partition. You can think of an
*Checkpointing* is a process by which readers mark or commit their position within a partition event sequence. Checkpointing is the responsibility of the consumer and occurs on a per-partition basis within a consumer group. This responsibility means that for each consumer group, each partition reader must keep track of its current position in the event stream, and can inform the service when it considers the data stream complete.
-If a reader disconnects from a partition, when it reconnects it begins reading at the checkpoint that was previously submitted by the last reader of that partition in that consumer group. When the reader connects, it passes the offset to the event hub to specify the location at which to start reading. In this way, you can use checkpointing to both mark events as "complete" by downstream applications, and to provide resiliency if a failover between readers running on different machines occurs. It is possible to return to older data by specifying a lower offset from this checkpointing process. Through this mechanism, checkpointing enables both failover resiliency and event stream replay.
+If a reader disconnects from a partition, when it reconnects it begins reading at the checkpoint that was previously submitted by the last reader of that partition in that consumer group. When the reader connects, it passes the offset to the event hub to specify the location at which to start reading. In this way, you can use checkpointing to both mark events as "complete" by downstream applications, and to provide resiliency if a failover between readers running on different machines occurs. It's possible to return to older data by specifying a lower offset from this checkpointing process. Through this mechanism, checkpointing enables both failover resiliency and event stream replay.
> [!IMPORTANT]
-> Offsets are provided by the Event Hubs service. It is the responsibility of the consumer to checkpoint as events are processed.
+> Offsets are provided by the Event Hubs service. It's the responsibility of the consumer to checkpoint as events are processed.
> [!NOTE] > If you are using Azure Blob Storage as the checkpoint store in an environment that supports a different version of Storage Blob SDK than those typically available on Azure, you'll need to use code to change the Storage service API version to the specific version supported by that environment. For example, if you are running [Event Hubs on an Azure Stack Hub version 2002](/azure-stack/user/event-hubs-overview), the highest available version for the Storage service is version 2017-11-09. In this case, you need to use code to target the Storage service API version to 2017-11-09. For an example on how to target a specific Storage API version, see these samples on GitHub:
Event data:
* User properties * System properties
-It is your responsibility to manage the offset.
+It's your responsibility to manage the offset.
## Next steps
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Berlin** | [NTT GDC](https://www.e-shelter.de/en/location/berlin-1-data-center) | 1 | Germany North | 10G | Colt, Equinix, NTT Global DataCenters EMEA| | **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | 10G | Equinix | | **Busan** | [LG CNS](https://www.lgcns.com/En/Service/DataCenter) | 2 | Korea South | n/a | LG CNS |
-| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | 10G, 100G | |
+| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | 10G, 100G | Ascenty |
| **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | 10G, 100G | CDC | | **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2| 10G, 100G | CDC, Equinix | | **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | 10G | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Teraco, Vodacom | | **Chennai** | Tata Communications | 2 | South India | 10G | BSNL, Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea | | **Chennai2** | Airtel | 2 | South India | 10G | Airtel | | **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo |
-| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | 10G, 100G | |
+| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | 10G, 100G | CoreSite |
| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | 10G | Interxion | | **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | 10G, 100G | Aryaka Networks, AT&T NetBond, Cologix, Equinix, Internet2, Level 3 Communications, Megaport, Neutrona Networks, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo| | **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | 10G, 100G | CoreSite, Megaport, PacketFabric, Zayo | | **Dubai** | [PCCS](https://www.pacificcontrols.net/cloudservices/https://docsupdatetracker.net/index.html) | 3 | UAE North | n/a | Etisalat UAE | | **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom | | **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | 10G, 100G | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport |
-| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | 10G, 100G | |
+| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | 10G, 100G | Interxion |
| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems | | **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Deutsche Telekom AG, Equinix | | **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Colt, Equinix, Megaport, Swisscom |
The following table shows connectivity locations and the service providers for e
| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | 10G, 100G | Bell Canada, Cologix, Fibrenoire, Megaport, Telus, Zayo | | **Mumbai** | Tata Communications | 2 | West India | 10G | BSNL, DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon | | **Mumbai2** | Airtel | 2 | West India | 10G | Airtel, Sify, Vodafone Idea |
-| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | 10G | DE-CIX, Megaport |
+| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | 10G | Colt, DE-CIX, Megaport |
| **New York** | [Equinix NY9](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny9/) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Colt, Coresite, DE-CIX, Equinix, InterCloud, Megaport, Packet, Zayo | | **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | 10G, 100G | British Telecom, Colt, Jisc, Level 3 Communications, Next Generation Data | | **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | 10G, 100G | AT TOKYO, BBIX, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications |
The following table shows connectivity locations and the service providers for e
| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo | | **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | 10G | Megaport, NextDC | | **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | n/a | 10G, 100G | Megaport, Zayo |
-| **Pune** | STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| 10G | |
-| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | 10G, 100G | Bell Canada, Megaport, Telus |
+| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| 10G | |
+| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | 10G, 100G | Bell Canada, Equinix, Megaport, Telus |
| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | 10G | Transtelco| | **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | 10G, 100G | | | **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | 10G | Equinix |
The following table shows connectivity locations and the service providers for e
| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | 10G, 100G | | | **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | 10G, 100G | Aryaka Networks, Equinix, Level 3 Communications, Megaport, Telus, Zayo | | **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | 10G, 100G | KINX, KT, LG CNS, LGUplus, Equinix, Sejong Telecom, SK Telecom |
+| **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | |
| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, IX Reach, Packet, PacketFabric, Level 3 Communications, Megaport, Orange, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | 10G, 100G | Colt, Coresite | | **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone |
-| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | 10G, 100G | China Unicom Global, Colt, Epsilon Global Communications, Equinix, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI |
+| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | 10G, 100G | CenturyLink Cloud Connect, China Unicom Global, Colt, Epsilon Global Communications, Equinix, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI |
| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | 10G, 100G |GlobalConnect, Megaport | | **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | n/a | 10G | Equinix, Telia Carrier | | **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | 10G, 100G | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ | | **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | 10G, 100G | Megaport, NextDC | | **Taipei** | Chief Telecom | 2 | n/a | 10G | Chief Telecom, Chunghwa Telecom, FarEasTone |
-| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | 10G, 100G | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Verizon |
+| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | 10G, 100G | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon |
| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | 10G, 100G | AT TOKYO, China Unicom Global, Megaport, Tokai Communications | | **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | 10G, 100G | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach Megaport, Telus, Verizon, Zayo | | **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | 10G, 100G | |
-| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | 10G | Cologix, Megaport, Telus |
+| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | 10G | Bell Canada, Cologix, Megaport, Telus |
| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | 1 | East US, East US 2 | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, Internet2, InterCloud, Iron Mountain, IX Reach, Level 3 Communications, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US, East US 2 | 10G, 100G | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo | | **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | 10G, 100G | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Airtel](https://www.airtel.in/business/#/)** | Supported | Supported | Chennai2, Mumbai2 | | **[AIS](https://business.ais.co.th/solution/en/azure-expressroute.html)** | Supported | Supported | Bangkok | | **[Aryaka Networks](https://www.aryaka.com/)** |Supported |Supported |Amsterdam, Chicago, Dallas, Hong Kong SAR, Sao Paulo, Seattle, Silicon Valley, Singapore, Tokyo, Washington DC |
-| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** |Supported |Supported |Sao Paulo |
+| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** |Supported |Supported | Campinas, Sao Paulo |
| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported |Amsterdam, Chicago, Dallas, Frankfurt, London, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC | | **[AT TOKYO](https://www.attokyo.com/connectivity/azure.html)** | Supported | Supported | Osaka, Tokyo2 | | **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/bics-cloud-connect-an-official-microsoft-azure-technology-partner/)** | Supported | Supported | Amsterdam2, London2 | | **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka, Tokyo | | **[BCX](https://www.bcx.co.za/solutions/connectivity/data-networks)** |Supported |Supported |Cape Town, Johannesburg|
-| **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported |Montreal, Toronto, Quebec City |
+| **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported |Montreal, Toronto, Quebec City, Vancouver |
| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** |Supported |Supported |Amsterdam, Amsterdam2, Chicago, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC | | **[BSNL](https://www.bsnl.co.in/opencms/bsnl/BSNL/services/enterprises/cloudway.html)** |Supported |Supported |Chennai, Mumbai | | **[C3ntro](https://www.c3ntro.com/)** |Supported |Supported |Miami | | **CDC** | Supported | Supported | Canberra, Canberra2 |
-| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |Amsterdam2, Chicago, Dublin, Frankfurt, Hong Kong, Las Vegas, London, London2, New York, Paris, San Antonio, Silicon Valley, Tokyo, Toronto, Washington DC, Washington DC2 |
+| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |Amsterdam2, Chicago, Dublin, Frankfurt, Hong Kong, Las Vegas, London, London2, New York, Paris, San Antonio, Silicon Valley, Singapore2, Tokyo, Toronto, Washington DC, Washington DC2 |
| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported |Hong Kong, Taipei | | **China Mobile International** |Supported |Supported | Hong Kong, Hong Kong2, Singapore | | **China Telecom Global** |Supported |Supported |Hong Kong, Hong Kong2 |
The following table shows locations by service provider. If you want to view ava
| **[Chunghwa Telecom](https://www.cht.com.tw/en/home/cht/about-cht/products-and-services/International/Cloud-Service)** |Supported |Supported |Taipei | | **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported |Miami | | **[Cologix](https://www.cologix.com/hyperscale/microsoft-azure/)** |Supported |Supported |Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC |
-| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Newport, New York, Osaka, Paris, Silicon Valley, Silicon Valley2, Singapore2, Tokyo, Washington DC, Zurich |
+| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, New York, Osaka, Paris, Silicon Valley, Silicon Valley2, Singapore2, Tokyo, Washington DC, Zurich |
| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported |Chicago, Silicon Valley, Washington DC |
-| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported |Chicago, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 |
+| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported |Chicago, Chicago2, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 |
| **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported |Amsterdam2, Dubai2, Frankfurt, Marseille, Mumbai, Munich, New York | | **[Devoli](https://devoli.com/expressroute)** | Supported |Supported | Auckland, Melbourne, Sydney | | **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported |Frankfurt |
The following table shows locations by service provider. If you want to view ava
| **du datamena** |Supported |Supported | Dubai2 | | **eir** |Supported |Supported |Dublin| | **[Epsilon Global Communications](https://www.epsilontel.com/solutions/direct-cloud-connect)** |Supported |Supported |Singapore, Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported |Dubai| | **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported |Amsterdam, Amsterdam2, Dublin, Frankfurt, London | | **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported |Taipei|
The following table shows locations by service provider. If you want to view ava
| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported |Chicago, Dallas, Silicon Valley, Washington DC | | **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported |Osaka, Tokyo | | **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported |Cape Town, Johannesburg, London |
-| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Copenhagen, Dublin, Frankfurt, London, Madrid, Marseille, Paris, Zurich |
+| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Copenhagen, Dublin, Dublin2, Frankfurt, London, Madrid, Marseille, Paris, Zurich |
| **[IRIDEOS](https://irideos.it/)** |Supported |Supported |Milan | | **Iron Mountain** | Supported |Supported |Washington DC | | **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**|Supported |Supported | Amsterdam, London2, Silicon Valley, Toronto, Washington DC |
The following table shows locations by service provider. If you want to view ava
| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva, Zurich | | **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** |Supported |Supported |Amsterdam, Chennai, Hong Kong SAR, London, Mumbai, Sao Paulo, Silicon Valley, Singapore, Washington DC | | **[Telefonica](https://www.telefonica.com/es/home)** |Supported |Supported |Amsterdam, Sao Paulo |
-| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported |London, London2, Singapore2 |
+| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported |London, London2, Singapore2, Tokyo |
| **Telenor** |Supported |Supported |Amsterdam, London, Oslo | | **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported |Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong, London, Oslo, Paris, Silicon Valley, Stockholm, Washington DC | | **[Telin](https://www.telin.net/product/data-connectivity/telin-cloud-exchange)** | Supported | Supported |Jakarta |
governance Guest Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration-custom.md
the state of the machine.
1. Last, the provider runs `Get` to return the current state of each setting so details are available both about why a machine isn't compliant and to confirm that the current state is compliant.
-
+ ## Trigger Set from outside machine A challenge in previous versions of DSC has been correcting drift at scale
returned as a string value for the **Phrase** property.
$reasons = @() $reasons += @{ Code = 'Name:Name:ReasonIdentifier'
- Phrase = 'Explain why the setting isn't compliant'
+ Phrase = 'Explain why the setting is not compliant'
} return @{ reasons = $reasons
class Example {
[Example] Get() { # return current state }
-
+ [void] Set() { # set the state }
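Putting the two snippets together, here is a minimal hedged sketch of a class-based resource whose `Get()` populates a **Reasons** property; the file path, reason code, and phrase are illustrative only.

```powershell
class Reason {
    [string] $Code
    [string] $Phrase
}

[DscResource()]
class Example {
    [DscProperty(Key)] [string] $Name
    [DscProperty(NotConfigurable)] [Reason[]] $Reasons

    [Example] Get() {
        $this.Reasons = @()
        if (-not (Test-Path -Path 'C:\example.txt')) {   # hypothetical setting to audit
            $this.Reasons += [Reason]@{
                Code   = 'Example:Example:FileMissing'
                Phrase = 'The expected file C:\example.txt was not found.'
            }
        }
        return $this
    }

    [void] Set() {
        # Correct the state here, for example by creating the expected file.
        New-Item -Path 'C:\example.txt' -ItemType File -Force | Out-Null
    }

    [bool] Test() {
        return ($this.Get().Reasons.Count -eq 0)
    }
}
```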
hdinsight Hdinsight Os Patching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-os-patching.md
description: Learn how to configure OS patching schedule for Linux-based HDInsig
Previously updated : 01/21/2020 Last updated : 08/30/2021 # Configure the OS patching schedule for Linux-based HDInsight clusters
Last updated 01/21/2020
> [!IMPORTANT] > Ubuntu images become available for new Azure HDInsight cluster creation within three months of being published. Running clusters aren't auto-patched. Customers must use script actions or other mechanisms to patch a running cluster. As a best practice, you can run these script actions and apply security updates right after the cluster creation.
-HDInsight provides support for you to perform common tasks on your cluster such as installing OS patches, security updates, and rebooting nodes. These tasks are accomplished using the following two scripts that can be run as [script actions](hdinsight-hadoop-customize-cluster-linux.md), and configured with parameters:
+HDInsight provides support for you to perform common tasks on your cluster such as installing OS patches, OS security updates, and rebooting nodes. These tasks are accomplished using the following two scripts that can be run as [script actions](hdinsight-hadoop-customize-cluster-linux.md), and configured with parameters:
- `schedule-reboots.sh` - Do an immediate restart, or schedule a restart on the cluster nodes. - `install-updates-schedule-reboots.sh` - Install all updates, only kernel + security updates, or only kernel updates.
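As a sketch, either script can be applied with Azure PowerShell; the cluster name, script URI, and argument value below are placeholders, so check the script's documented parameters before running it.

```powershell
# Apply the update script to head and worker nodes as a script action.
Submit-AzHDInsightScriptAction -ClusterName "my-hdi-cluster" `
    -Name "install-os-updates" `
    -Uri "https://<storage-account>.blob.core.windows.net/scripts/install-updates-schedule-reboots.sh" `
    -NodeTypes HeadNode, WorkerNode `
    -Parameters "1"
```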
iot-edge How To Install Iot Edge On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-on-windows.md
Verify that IoT Edge for Linux on Windows was successfully installed and configu
+When you create a new IoT Edge device, it will display the status code `417 -- The device's deployment configuration is not set` in the Azure portal. This status is normal, and means that the device is ready to receive a module deployment.
+ ## Next steps * Continue to [deploy IoT Edge modules](how-to-deploy-modules-portal.md) to learn how to deploy modules onto your device.
iot-edge How To Install Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge.md
View all the modules running on your IoT Edge device. When the service starts fo
sudo iotedge list ```
+When you create a new IoT Edge device, it will display the status code `417 -- The device's deployment configuration is not set` in the Azure portal. This status is normal, and means that the device is ready to receive a module deployment.
+ ## Offline or specific version installation (optional) The steps in this section are for scenarios not covered by the standard installation steps. This may include:
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart-linux.md
Follow these steps to start the **Set Modules** wizard to deploy your first modu
1. Select the device ID of the target device from the list of devices.
+ When you create a new IoT Edge device, it will display the status code `417 -- The device's deployment configuration is not set` in the Azure portal. This status is normal, and means that the device is ready to receive a module deployment.
+ 1. On the upper bar, select **Set Modules**. ![Screenshot that shows selecting Set Modules.](./media/quickstart/select-set-modules.png)
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart.md
Follow these steps to deploy your first module from Azure Marketplace.
1. Select the device ID of the target device from the list of devices.
+ When you create a new IoT Edge device, it will display the status code `417 -- The device's deployment configuration is not set` in the Azure portal. This status is normal, and means that the device is ready to receive a module deployment.
++ 1. On the upper bar, select **Set Modules**. ![Screenshot that shows selecting Set Modules.](./media/quickstart/select-set-modules.png)
logic-apps Logic Apps Azure Resource Manager Templates Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-azure-resource-manager-templates-overview.md
Title: Overview - Automate deployment for Azure Logic Apps
description: Learn about Azure Resource Manager templates to automate deployment for Azure Logic Apps ms.suite: integration-+ Last updated 11/06/2020
This example template shows how you can complete these tasks by defining secured
// End workflow definition // Start workflow definition parameter values "parameters": {
- "authenticationType": "[parameters('TemplateAuthenticationType')]", // Template parameter reference
- "fabrikamPassword": "[parameters('TemplateFabrikamPassword')]", // Template parameter reference
- "fabrikamUserName": "[parameters('TemplateFabrikamUserName')]" // Template parameter reference
+ "authenticationType": {
+ "value": "[parameters('TemplateAuthenticationType')]" // Template parameter reference
+ },
+ "fabrikamPassword": {
+ "value": "[parameters('TemplateFabrikamPassword')]" // Template parameter reference
+ },
+ "fabrikamUserName": {
+ "value": "[parameters('TemplateFabrikamUserName')]" // Template parameter reference
+ }
}, "accessControl": {} },
Here is the parameterized sample template that's used by this topic's examples:
## Next steps > [!div class="nextstepaction"]
-> [Create logic app templates](../logic-apps/logic-apps-create-azure-resource-manager-templates.md)
+> [Create logic app templates](../logic-apps/logic-apps-create-azure-resource-manager-templates.md)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 08/27/2021 Last updated : 08/30/2021 # Limits and configuration reference for Azure Logic Apps
This section lists the outbound IP addresses for the Azure Logic Apps service. I
| Norway East | 51.120.88.52, 51.120.88.51, 51.13.65.206, 51.13.66.248, 51.13.65.90, 51.13.65.63, 51.13.68.140, 51.120.91.248 | | South Africa North | 102.133.231.188, 102.133.231.117, 102.133.230.4, 102.133.227.103, 102.133.228.6, 102.133.230.82, 102.133.231.9, 102.133.231.51 | | South Africa West | 102.133.72.98, 102.133.72.113, 102.133.75.169, 102.133.72.179, 102.133.72.37, 102.133.72.183, 102.133.72.132, 102.133.75.191 |
-| South Central US | 104.210.144.48, 13.65.82.17, 13.66.52.232, 23.100.124.84, 70.37.54.122, 70.37.50.6, 23.100.127.172, 23.101.183.225 |
+| South Central US | 104.210.144.48, 13.65.82.17, 13.66.52.232, 23.100.124.84, 70.37.54.122, 70.37.50.6, 23.100.127.172, 23.101.183.225, 13.73.244.160 - 13.73.244.191 |
| South India | 52.172.50.24, 52.172.55.231, 52.172.52.0, 104.211.229.115, 104.211.230.129, 104.211.230.126, 104.211.231.39, 104.211.227.229 | | Southeast Asia | 13.76.133.155, 52.163.228.93, 52.163.230.166, 13.76.4.194, 13.67.110.109, 13.67.91.135, 13.76.5.96, 13.67.107.128 | | Switzerland North | 51.103.137.79, 51.103.135.51, 51.103.139.122, 51.103.134.69, 51.103.138.96, 51.103.138.28, 51.103.136.37, 51.103.136.210 |
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-workspace.md
When you create a new workspace, it automatically creates several Azure resource
> By default, the storage account is a general-purpose v1 account. You can [upgrade this to general-purpose v2](../storage/common/storage-account-upgrade.md) after the workspace has been created. > Do not enable hierarchical namespace on the storage account after upgrading to general-purpose v2.
- To use an existing Azure Storage account, it cannot be of type BlobStorage or a premium account (Premium_LRS and Premium_GRS). It also cannot have a hierarchical namespace (used with Azure Data Lake Storage Gen2). Neither premium storage or hierarchical namespaces are supported with the _default_ storage account of the workspace. You can use premium storage or hierarchical namespace with _non-default_ storage accounts.
+ To use an existing Azure Storage account, it cannot be of type BlobStorage or a premium account (Premium_LRS and Premium_GRS). It also cannot have a hierarchical namespace (used with Azure Data Lake Storage Gen2). Neither premium storage nor hierarchical namespaces are supported with the _default_ storage account of the workspace. You can use premium storage or hierarchical namespace with _non-default_ storage accounts.
+ [Azure Container Registry](https://azure.microsoft.com/services/container-registry/): Registers docker containers that you use during training and when you deploy a model. To minimize costs, ACR is **lazy-loaded** until deployment images are created.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
Previously updated : 08/06/2021 Last updated : 08/30/2021 # Create and manage an Azure Machine Learning compute instance
Script arguments can be referred to in the script as $1, $2, etc.
If your script does something specific to azureuser, such as installing a conda environment or a Jupyter kernel, you will have to put it within a *sudo -u azureuser* block like this
-```shell
-#!/bin/bash
-set -e
-
-# This script installs a pip package in compute instance azureml_py38 environment
-
-sudo -u azureuser -i <<'EOF'
-# PARAMETERS
-PACKAGE=numpy
-ENVIRONMENT=azureml_py38
-conda activate "$ENVIRONMENT"
-pip install "$PACKAGE"
-conda deactivate
-EOF
-```
The command *sudo -u azureuser* changes the current working directory to */home/azureuser*. You also can't access the script arguments in this block.
+For other example scripts, see [azureml-examples](https://github.com/Azure/azureml-examples/tree/main/setup-ci).
+ You can also use the following environment variables in your script: 1. CI_RESOURCE_GROUP
media-services Configure Connect Nodejs Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/configure-connect-nodejs-howto.md
You will work with some files in Azure Samples. Clone the Node.JS samples reposi
git clone https://github.com/Azure-Samples/media-services-v3-node-tutorials.git ```
-## Install the packages
+## Install the Node.js packages
### Install @azure/arm-mediaservices
For this example, you will use the following packages in the `package.json` file
## Connect to Node.js client using TypeScript -- ### Sample *.env* file Copy the content of this file to a file named *.env*. It should be stored at the root of your working repository. These are the values you got from the API Access page for your Media Services account in the portal.
+To get the values needed for the *.env* file, first read and review the how-to article [Access the API](./access-api-howto.md).
+You can use either the Azure portal or the CLI to get the values needed to enter into this sample's environment variables file.
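For example, the Azure CLI can create a service principal for the account and print the credential values to copy into the *.env* file (a minimal sketch; the account and resource group names are placeholders):

```azurecli
az ams account sp create \
  --account-name amsaccount \
  --resource-group amsResourceGroup
```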
+ Once you have created the *.env* file, you can start working with the samples. ```nodejs
DRM_SYMMETRIC_KEY="add random base 64 encoded string here"
cd AMSv3Samples ```
-2. Install the packages used in the *packages.json* file.
+2. Install the packages used in the *package.json* file.
``` npm install
media-services Media Services Sspk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-sspk.md
Interim and Final SSPK licensees can submit technical questions to [smoothpk@mic
* FAIRWIT HONGKONG CO., LIMITED * Fluendo S.A. * FUNAI ELECTRIC CO., LTD
+* Guangdong Asano Technology CO.,Ltd.
* Hisense Broadband Multimedia Technologies Co.,Ltd. * Hisense International Co., Ltd. * Hisense Visual Technology Co., Ltd
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-hyper-v-migration.md
You can select up to 10 VMs at once for replication. If you want to migrate more
| **Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks. | | **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. | | **UEFI - Secure boot** | Not supported for migration.|
-| **Disk size** | up to 2 TB OS disk, 8 TB for the data disks.|
+| **Disk size** | up to 2 TB OS disk, 4 TB for the data disks.|
| **Disk number** | A maximum of 16 disks per VM.| | **Encrypted disks/volumes** | Not supported for migration.| | **RDM/passthrough disks** | Not supported for migration.|
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-migrate-vmware-powershell.md
Previously updated : 05/11/2021 Last updated : 08/20/2021
You can specify the replication properties as follows.
Disk Type | Mandatory | Specify the name of the load balancer to be created. Infrastructure redundancy | Optional | Specify infrastructure redundancy option as follows. <br/><br/> - **Availability Zone** to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. This option is only available if the target region selected for the migration supports Availability Zones. To use availability zones, specify the availability zone value for (`TargetAvailabilityZone`) parameter. <br/> - **Availability Set** to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets to use this option. To use availability set, specify the availability set ID for (`TargetAvailabilitySet`) parameter. Boot Diagnostic Storage Account | Optional | To use a boot diagnostic storage account, specify the ID for (`TargetBootDiagnosticStorageAccount`) parameter. <br/> - The storage account used for boot diagnostics should be in the same subscription that you're migrating your VMs to. <br/> - By default, no value is set for this parameter.
+ Tags | Optional | Add tags to your migrated virtual machines, disks, and NICs. <br/> Use (`Tag`) to add tags to virtual machines, disks, and NICs. <br/> or <br/> Use (`VMTag`) for adding tags to your migrated virtual machines.<br/> Use (`DiskTag`) for adding tags to disks. <br/> Use (`NicTag`) for adding tags to network interfaces. <br/> For example, add the required tags to a variable $tags and pass the variable in the required parameter. $tags = @{Organization="Contoso"}
Resource Group | Optional | IC configuration can be specified using the [New-AzM
Network Interface | Optional | Specify the name of the Azure VM to be created by using the [`TargetVMName`] parameter. Availability Zone | Optional | To use availability zones, specify the availability zone value for [`TargetAvailabilityZone`] parameter. Availability Set | Optional | To use availability set, specify the availability set ID for [`TargetAvailabilitySet`] parameter. -
+Tags | Optional | For updating tags, use the following parameters [`UpdateTag`] or [`UpdateVMTag`], [`UpdateDiskTag`], [`UpdateNicTag`], and type of update tag operation [`UpdateTagOperation`] or [`UpdateVMTagOperation`], [`UpdateDiskTagOperation`], [`UpdateNicTagOperation`]. The update tag operation takes the following values: Merge, Delete, and Replace. <br/> Use [`UpdateTag`] to update all tags across virtual machines, disks, and NICs. <br/> Use [`UpdateVMTag`] for updating virtual machine tags. <br/> Use [`UpdateDiskTag`] for updating disk tags. <br/> Use [`UpdateNicTag`] for updating NIC tags. <br/> Use [`UpdateTagOperation`] to update the operation for all tags across virtual machines, disks, and NICs. <br/> Use [`UpdateVMTagOperation`] for updating virtual machine tags. <br/> Use [`UpdateDiskTagOperation`] for updating disk tags. <br/> Use [`UpdateNicTagOperation`] for updating NIC tags. <br/> <br/> The *replace* option replaces the entire set of existing tags with a new set. <br/> The *merge* option allows adding tags with new names and updating the values of tags with existing names. <br/> The *delete* option allows selectively deleting tags based on given names or name/value pairs.
+Disk(s) | Optional | For the OS disk: <br/> Update the name of the OS disk by using the [`TargetDiskName`] parameter. <br/><br/> For updating multiple disks: <br/> Use [Set-AzMigrateDiskMapping](/powershell/module/az.migrate/set-azmigratediskmapping) to set the disk names to a variable *$DiskMapping* and then use the [`DiskToUpdate`] parameter and pass along the variable. <br/> <br/> **Note:** The disk ID to be used in [Set-AzMigrateDiskMapping](/powershell/module/az.migrate/set-azmigratediskmapping) is the unique identifier (UUID) property for the disk retrieved using the [Get-AzMigrateDiscoveredServer](/powershell/module/az.migrate/get-azmigratediscoveredserver) cmdlet.
+NIC(s) name | Optional | Use [New-AzMigrateNicMapping](/powershell/module/az.migrate/new-azmigratenicmapping) to set the NIC names to a variable *$NICMapping* and then use the [`NICToUpdate`] parameter and pass the variable.
The [Get-AzMigrateServerReplication](/powershell/module/az.migrate/get-azmigrateserverreplication) cmdlet returns a job which can be tracked for monitoring the status of the operation.
$ReplicatingServer = Get-AzMigrateServerReplication -TargetObjectID $Replicating
Write-Output $ReplicatingServer.ProviderSpecificDetail.VMNic ```
-In the following example, we'll update the NIC configuration by making the first NIC as primary and assigning a static IP to it. we'll discard the second NIC for migration and update the target VM name and size.
+In the following example, we'll update the NIC configuration by making the first NIC primary and assigning a static IP to it. We'll discard the second NIC for migration, update the target VM name and size, and customize the NIC names.
```azurepowershell-interactive # Specify the NIC properties to be updated for a replicating VM. $NicMapping = @()
-$NicMapping1 = New-AzMigrateNicMapping -NicId $ReplicatingServer.ProviderSpecificDetail.VMNic[0].NicId -TargetNicIP ###.###.###.### -TargetNicSelectionType Primary
-$NicMapping2 = New-AzMigrateNicMapping -NicId $ReplicatingServer.ProviderSpecificDetail.VMNic[1].NicId -TargetNicSelectionType DoNotCreate
+$NicMapping1 = New-AzMigrateNicMapping -NicId $ReplicatingServer.ProviderSpecificDetail.VMNic[0].NicId -TargetNicIP ###.###.###.### -TargetNicSelectionType Primary -TargetNicName "ContosoNic_1"
+$NicMapping2 = New-AzMigrateNicMapping -NicId $ReplicatingServer.ProviderSpecificDetail.VMNic[1].NicId -TargetNicSelectionType DoNotCreate -TargetNicName "ContosoNic_2"
$NicMapping += $NicMapping1 $NicMapping += $NicMapping2
$NicMapping += $NicMapping2
# Update the name, size and NIC configuration of a replicating server $UpdateJob = Set-AzMigrateServerReplication -InputObject $ReplicatingServer -TargetVMSize Standard_DS13_v2 -TargetVMName MyMigratedVM -NicToUpdate $NicMapping ```+
+In the following example, we'll customize the disk names.
+
+```azurepowershell-interactive
+# Customize the Disk names for a replicating VM
+$OSDisk = Set-AzMigrateDiskMapping -DiskID "6000C294-1217-dec3-bc18-81f117220424" -DiskName "ContosoDisk_1"
+$DataDisk1= Set-AzMigrateDiskMapping -DiskID "6000C292-79b9-bbdc-fb8a-f1fa8dbeff84" -DiskName "ContosoDisk_2"
+$DiskMapping = $OSDisk, $DataDisk1
+```
+
+```azurepowershell-interactive
+# Update the disk names for a replicating server
+$UpdateJob = Set-AzMigrateServerReplication -InputObject $ReplicatingServer -DiskToUpdate $DiskMapping
+ ```
+
+In the following example, we'll add tags to the replicating VMs.
+
+```azurepowershell-interactive
+# Update all tags across virtual machines, disks, and NICs.
+Set-AzMigrateServerReplication -UpdateTag $UpdateTag -UpdateTagOperation Merge/Replace/Delete
+
+# Update virtual machine tags
+Set-AzMigrateServerReplication -UpdateVMTag $UpdateVMTag -UpdateVMTagOperation Merge/Replace/Delete
+```
+Use the following example to track the job status:
+ ```azurepowershell-interactive # Track job status to check for completion while (($UpdateJob.State -eq 'InProgress') -or ($UpdateJob.State -eq 'NotStarted')){
while (($UpdateJob.State -eq 'InProgress') -or ($UpdateJob.State -eq 'NotStarted
Write-Output $UpdateJob.State ``` -- ## 11. Run a test migration When delta replication begins, you can run a test migration for the VMs before running a full migration to Azure. We highly recommend that you do test migration at least once for each machine before you migrate it. The cmdlet returns a job which can be tracked for monitoring the status of the operation.
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-adls-gen2.md
When you choose **Managed Identity**, to set up the connection, you must first g
When the selected authentication method is **Account Key**, you need to get your access key and store it in the key vault:
-1. Navigate to your ADLS Gne2 storage account
+1. Navigate to your ADLS Gen2 storage account
1. Select **Security + networking > Access keys** 1. Copy your *key* and save it somewhere for the next steps 1. Navigate to your key vault
To create and run a new scan, do the following:
## Next steps - [Browse the Azure Purview Data catalog](how-to-browse-catalog.md)-- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
+- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
remote-rendering Graphics Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/concepts/graphics-bindings.md
Once set up, the graphics binding gives access to various functions that affect
In Unity, the entire binding is handled by the `RemoteUnityClientInit` struct passed into `RemoteManagerUnity.InitializeManager`. To set the graphics mode, the `GraphicsApiType` field has to be set to the chosen binding. The field is automatically populated depending on whether an XRDevice is present. The behavior can be manually overridden as follows (see the sketch after this list):
-* **HoloLens 2**: the [Windows Mixed Reality](#windows-mixed-reality) graphics binding is always used.
+* **HoloLens 2**: the [OpenXR](#openxr) or the [Windows Mixed Reality](#windows-mixed-reality) graphics binding is used depending on the active Unity XR plugin.
* **Flat UWP desktop app**: [Simulation](#simulation) is always used. * **Unity editor**: [Simulation](#simulation) is always used unless a WMR VR headset is connected in which case ARR will be disabled to allow to debug the non-ARR related parts of the application. See also [holographic remoting](../how-tos/unity/holographic-remoting.md).
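As a minimal sketch of that manual override (assuming an already populated `RemoteUnityClientInit` instance named `clientInit`; the field name comes from the paragraph above):

```cs
// Override the auto-detected graphics mode before initializing the manager.
clientInit.GraphicsApiType = GraphicsApiType.OpenXrD3D11;
RemoteManagerUnity.InitializeManager(clientInit);
```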
To select a graphics binding, take the following two steps: First, the graphics
```cs RemoteRenderingInitialization managerInit = new RemoteRenderingInitialization();
-managerInit.GraphicsApi = GraphicsApiType.WmrD3D11;
+managerInit.GraphicsApi = GraphicsApiType.OpenXrD3D11;
managerInit.ConnectionType = ConnectionType.General; managerInit.Right = ///... RemoteManagerStatic.StartupRemoteRendering(managerInit);
RemoteManagerStatic.StartupRemoteRendering(managerInit);
```cpp RemoteRenderingInitialization managerInit;
-managerInit.GraphicsApi = GraphicsApiType::WmrD3D11;
+managerInit.GraphicsApi = GraphicsApiType::OpenXrD3D11;
managerInit.ConnectionType = ConnectionType::General; managerInit.Right = ///... StartupRemoteRendering(managerInit); // static function in namespace Microsoft::Azure::RemoteRendering ```-
-The call above is necessary to initialize Azure Remote Rendering into the holographic APIs. This function must be called before any holographic API is called and before any other Remote Rendering APIs are accessed. Similarly, the corresponding de-init function `RemoteManagerStatic.ShutdownRemoteRendering();` should be called after no holographic APIs are being called anymore.
+The `StartupRemoteRendering` call above must happen before any other Remote Rendering APIs are accessed.
+Similarly, the corresponding de-init function `RemoteManagerStatic.ShutdownRemoteRendering();` should be called after all other Remote Rendering objects have been destroyed.
+For WMR, `StartupRemoteRendering` also needs to be called before any holographic API is called. For OpenXR, the same applies to any OpenXR-related APIs.
## <span id="access">Accessing graphics binding
if (ApiHandle<GraphicsBinding> binding = currentSession->GetGraphicsBinding())
## Graphic APIs
-There are currently two graphics APIs that can be selected, `WmrD3D11` and `SimD3D11`. A third one `Headless` exists but is not yet supported on the client side.
+There are currently three graphics APIs that can be selected, `OpenXrD3D11`, `WmrD3D11` and `SimD3D11`. A fourth one `Headless` exists but is not yet supported on the client side.
+
+### OpenXR
+
+`GraphicsApiType.OpenXrD3D11` is the default binding to run on HoloLens 2. It will create the `GraphicsBindingOpenXrD3d11` binding. In this mode, Azure Remote Rendering creates an OpenXR API layer to integrate itself into the OpenXR runtime.
+
+To access the derived graphics bindings, the base `GraphicsBinding` has to be cast.
+There are three things that need to be done to use the OpenXR binding:
+
+#### Package custom OpenXR layer json
+
+To use Remote Rendering with OpenXR the custom OpenXR API layer needs to be activated. This is done by calling `StartupRemoteRendering` mentioned in the previous section. However, as a prerequisite the `XrApiLayer_msft_holographic_remoting.json` needs to be packaged with the application so it can be loaded. This is done automatically if the **"Microsoft.Azure.RemoteRendering.Cpp"** NuGet package is added to a project.
+
+#### Inform Remote Rendering of the used XR Space
+
+This is needed to align remote and locally rendered content.
+
+```cs
+RenderingSession currentSession = ...;
+ulong space = ...; // XrSpace cast to ulong
+GraphicsBindingOpenXrD3d11 openXrBinding = (currentSession.GraphicsBinding as GraphicsBindingOpenXrD3d11);
+if (openXrBinding.UpdateAppSpace(space) == Result.Success)
+{
+ ...
+}
+```
+
+```cpp
+ApiHandle<RenderingSession> currentSession = ...;
+XrSpace space = ...;
+ApiHandle<GraphicsBindingOpenXrD3d11> openXrBinding = currentSession->GetGraphicsBinding().as<GraphicsBindingOpenXrD3d11>();
+#ifdef _M_ARM64
+ if (openXrBinding->UpdateAppSpace(reinterpret_cast<uint64_t>(space)) == Result::Success)
+#else
+ if (openXrBinding->UpdateAppSpace(space) == Result::Success)
+#endif
+{
+ ...
+}
+```
+
+Where the above `XrSpace` is the one used by the application that defines the world space coordinate system in which coordinates in the API are expressed.
+
+#### Render remote image (OpenXR)
+
+At the start of each frame, the remote frame needs to be rendered into the back buffer. This is done by calling `BlitRemoteFrame`, which will fill both color and depth information for both eyes into the currently bound render target. Thus it is important to do so after binding the full back buffer as a render target.
+
+> [!WARNING]
+> After the remote image has been blitted into the back buffer, the local content should be rendered using a single-pass stereo rendering technique, e.g. using **SV_RenderTargetArrayIndex**. Using other stereo rendering techniques, such as rendering each eye in a separate pass, can result in major performance degradation or graphical artifacts and should be avoided.
+
+```cs
+RenderingSession currentSession = ...;
+GraphicsBindingOpenXrD3d11 openXrBinding = (currentSession.GraphicsBinding as GraphicsBindingOpenXrD3d11);
+openXrBinding.BlitRemoteFrame();
+```
+
+```cpp
+ApiHandle<RenderingSession> currentSession = ...;
+ApiHandle<GraphicsBindingOpenXrD3d11> openXrBinding = currentSession->GetGraphicsBinding().as<GraphicsBindingOpenXrD3d11>();
+openXrBinding->BlitRemoteFrame();
+```
### Windows Mixed Reality
-`GraphicsApiType.WmrD3D11` is the default binding to run on HoloLens 2. It will create the `GraphicsBindingWmrD3d11` binding. In this mode Azure Remote Rendering hooks directly into the holographic APIs.
+`GraphicsApiType.WmrD3D11` is the previously used graphics binding to run on HoloLens 2. It will create the `GraphicsBindingWmrD3d11` binding. In this mode Azure Remote Rendering hooks directly into the holographic APIs.
To access the derived graphics bindings, the base `GraphicsBinding` has to be cast. There are two things that need to be done to use the WMR binding: #### Inform Remote Rendering of the used coordinate system
+This is needed to align remote and locally rendered content.
+ ```cs RenderingSession currentSession = ...; IntPtr ptr = ...; // native pointer to ISpatialCoordinateSystem
void* ptr = ...; // native pointer to ISpatialCoordinateSystem
ApiHandle<GraphicsBindingWmrD3d11> wmrBinding = currentSession->GetGraphicsBinding().as<GraphicsBindingWmrD3d11>(); if (wmrBinding->UpdateUserCoordinateSystem(ptr) == Result::Success) {
- //...
+ ...
} ``` Where the above `ptr` must be a pointer to a native `ABI::Windows::Perception::Spatial::ISpatialCoordinateSystem` object that defines the world space coordinate system in which coordinates in the API are expressed.
-#### Render remote image
-
-At the start of each frame, the remote frame needs to be rendered into the back buffer. This is done by calling `BlitRemoteFrame`, which will fill both color and depth information for both eyes into the currently bound render target. Thus it is important to do so after binding the full back buffer as a render target.
+#### Render remote image (WMR)
-> [!WARNING]
-> After the remote image was blit into the backbuffer, the local content should be rendered using a single-pass stereo rendering technique, e.g. using **SV_RenderTargetArrayIndex**. Using other stereo rendering techniques, such as rendering each eye in a separate pass, can result in major performance degradation or graphical artifacts and should be avoided.
+The same considerations as in the OpenXR case above apply here. The API calls look like this:
```cs RenderingSession currentSession = ...;
remote-rendering Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/resources/troubleshoot.md
Coplanar surfaces can have a number of different causes:
## Graphics artifacts using multi-pass stereo rendering in native C++ apps
-In some cases, custom native C++ apps that use a multi-pass stereo rendering mode for local content (rendering to the left and right eye in separate passes) after calling [**BlitRemoteFrame**](../concepts/graphics-bindings.md#render-remote-image) can trigger a driver bug. The bug results in non-deterministic rasterization glitches, causing individual triangles or parts of triangles of the local content to randomly disappear. For performance reasons, it is recommended anyway to render local content with a more modern single-pass stereo rendering technique, for example using **SV_RenderTargetArrayIndex**.
+In some cases, custom native C++ apps that use a multi-pass stereo rendering mode for local content (rendering to the left and right eye in separate passes) after calling [**BlitRemoteFrame**](../concepts/graphics-bindings.md#render-remote-image-openxr) can trigger a driver bug. The bug results in non-deterministic rasterization glitches, causing individual triangles or parts of triangles of the local content to randomly disappear. For performance reasons, it is recommended in any case to render local content with a more modern single-pass stereo rendering technique, for example using **SV_RenderTargetArrayIndex**.
## Conversion File Download Errors
remote-rendering Integrate Remote Rendering Into Holographic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/tutorials/native-cpp/hololens/integrate-remote-rendering-into-holographic-app.md
void HolographicAppMain::StartModelLoading()
[this](RR::Status status, RR::ApiHandle<RR::LoadModelResult> result) { m_modelLoadResult = RR::StatusToResult(status);
- m_modelLoadFinished = true; // successful if m_modelLoadResult==RR::Result::Success
- char buffer[1024];
- sprintf_s(buffer, "Remote Rendering: Model loading completed. Result: %s\n", RR::ResultToString(m_modelLoadResult));
- OutputDebugStringA(buffer);
+ m_modelLoadFinished = true;
+
+ if (m_modelLoadResult == RR::Result::Success)
+ {
+ RR::Double3 pos = { 0.0, 0.0, -2.0 };
+ result->GetRoot()->SetPosition(pos);
+ }
}, // progress update callback [this](float progress)
route-server Quickstart Configure Route Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/quickstart-configure-route-server-powershell.md
$virtualnetwork | Set-AzVirtualNetwork
```azurepowershell-interactive $rs = @{
- RouterServerName = 'myRouteServer'
+ RouteServerName = 'myRouteServer'
ResourceGroupName = 'myRouteServerRG' Location = 'WestUS' HostedSubnet = $subnetConfig.Id
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-intro.md
Custom skills can support more complex scenarios, such as recognizing forms, or
## Enrichment steps <a name="enrichment-steps"></a>
-An enrichment pipeline consists of [*indexers*](search-indexer-overview.md) that have [*skillsets*](cognitive-search-working-with-skillsets.md). A skillset defines the enrichment steps, and the indexer drives the skillset. When configuring an indexer, you can include properties like output field mappings that send enriched content to a [search index](search-what-is-an-index.md) or a [knowledge store](knowledge-store-concept-intro.md).
+An enrichment pipeline consists of [*indexers*](search-indexer-overview.md) that have [*skillsets*](cognitive-search-working-with-skillsets.md). A skillset defines the enrichment steps, and the indexer drives the skillset. When configuring an indexer, you can include properties like output field mappings that send enriched content to a [search index](search-what-is-an-index.md) or projections that define data structures in a [knowledge store](knowledge-store-concept-intro.md).
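As a minimal sketch of how these pieces connect, an indexer definition references the skillset and maps enriched nodes to index fields (the object names and the source path are placeholders; the property names are from the REST API):

```json
{
  "name": "my-indexer",
  "dataSourceName": "my-datasource",
  "targetIndexName": "my-index",
  "skillsetName": "my-skillset",
  "outputFieldMappings": [
    {
      "sourceFieldName": "/document/content/organizations",
      "targetFieldName": "organizations"
    }
  ]
}
```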
Post-indexing, you can access content via search requests through all [query types supported by Azure Cognitive Search](search-query-overview.md).
To iterate over the above steps, [reset the indexer](search-howto-reindex.md) be
+ [Quickstart: Try AI enrichment in a portal walk-through](cognitive-search-quickstart-blob.md) + [Tutorial: Learn about the AI enrichment REST APIs](cognitive-search-tutorial-blob.md)
-+ [Knowledge store](knowledge-store-concept-intro.md)
-+ [Create a knowledge store in REST](knowledge-store-create-rest.md)
++ [Skillset concepts](cognitive-search-working-with-skillsets.md)
++ [Knowledge store concepts](knowledge-store-concept-intro.md)
++ [Create a skillset](cognitive-search-defining-skillset.md)
++ [Create a knowledge store](knowledge-store-create-rest.md)
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-defining-skillset.md
The following example shows the results of an entity recognition skill that dete
:::image type="content" source="media/cognitive-search-defining-skillset/doc-in-search-explorer.png" alt-text="Screenshot of a document in Search Explorer.":::
+## Tips for a first skillset
+++ Assemble a representative sample of your content in Blob Storage or another supported indexer data source and run the **Import data** wizard to create the skillset, index, indexer, and data source object. +
+ The wizard automates several steps that can be challenging the first time around, including defining the fields in an index, defining output field mappings in an indexer, and defining projections in a knowledge store if you are using one. For some skills, such as OCR or image analysis, the wizard will add utility skills that merge image and text content that was separated during document cracking.
+++ Alternatively, you can import skill Postman collections that provide full examples of all of the object definitions required to evaluate a skill, from skillset to an index that you can query to view the results of a transformation.+ ## Next steps Context and input source fields are paths to nodes in an enrichment tree. As a next step, learn more about the syntax for setting up paths to nodes in an enrichment tree.
search Cognitive Search Tutorial Debug Sessions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-tutorial-debug-sessions.md
Before you begin, have the following prerequisites in place:
+ [Postman desktop app](https://www.getpostman.com/) and a [Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/Debug-sessions) to create objects using the REST APIs.
-+ [Sample data (clinical trials)](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials-pdf-19).
++ [Sample data (clinical trials)](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials/clinical-trials-pdf-19). > [!NOTE] > This quickstart also uses [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an additional Cognitive Services resource.
Before you begin, have the following prerequisites in place:
This section creates the sample data set in Azure Blob Storage so that the indexer and skillset have content to work with.
-1. [Download sample data (clinical-trials-pdf-19)](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials-pdf-19), consisting of 19 files.
+1. [Download sample data (clinical-trials-pdf-19)](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials/clinical-trials-pdf-19), consisting of 19 files.
1. [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) or [find an existing account](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
search Search Howto Complex Data Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-complex-data-types.md
Complex fields represent either a single object in the document, or an array of
Azure Cognitive Search natively supports complex types and collections. These types allow you to model almost any JSON structure in an Azure Cognitive Search index. In previous versions of Azure Cognitive Search APIs, only flattened row sets could be imported. In the newest version, your index can now more closely correspond to source data. In other words, if your source data has complex types, your index can have complex types also.
-To get started, we recommend the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-dat), which you can load in the **Import data** wizard in the Azure portal. The wizard detects complex types in the source and suggests an index schema based on the detected structures.
+To get started, we recommend the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels), which you can load in the **Import data** wizard in the Azure portal. The wizard detects complex types in the source and suggests an index schema based on the detected structures.
> [!Note] > Support for complex types became generally available starting in `api-version=2019-05-06`.
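For illustration, a complex field in an index definition looks like the following sketch (an `Address` object with simple sub-fields, modeled on the Hotels sample):

```json
{
  "name": "Address",
  "type": "Edm.ComplexType",
  "fields": [
    { "name": "StreetAddress", "type": "Edm.String", "searchable": true },
    { "name": "City", "type": "Edm.String", "filterable": true, "facetable": true },
    { "name": "StateProvince", "type": "Edm.String", "filterable": true }
  ]
}
```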
As with top-level simple fields, simple sub-fields of complex fields can only be
## Next steps
-Try the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-dat) in the **Import data** wizard. You'll need the Cosmos DB connection information provided in the readme to access the data.
+Try the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels) in the **Import data** wizard. You'll need the Cosmos DB connection information provided in the readme to access the data.
With that information in hand, your first step in the wizard is to create a new Azure Cosmos DB data source. Further on in the wizard, when you get to the target index page, you'll see an index with complex types. Create and load this index, and then execute queries to understand the new structure.
search Search Howto Index Json Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-json-blobs.md
api-key: [admin key]
### json example (single hotel JSON files)
-The [hotel JSON document data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotel-json-documents) on GitHub is helpful for testing JSON parsing, where each blob represents a structured JSON file. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
+The [hotel JSON document data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels/hotel-json-documents) on GitHub is helpful for testing JSON parsing, where each blob represents a structured JSON file. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
The data set consists of five blobs, each containing a hotel document with an address collection and a rooms collection. The blob indexer detects both collections and reflects the structure of the input documents in the index schema.
api-key: [admin key]
### jsonArrays example (clinical trials sample data)
-The [clinical trials JSON data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials-json) on GitHub is helpful for testing JSON array parsing. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
+The [clinical trials JSON data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials/clinical-trials-json) on GitHub is helpful for testing JSON array parsing. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
The data set consists of eight blobs, each containing a JSON array of entities, for a total of 100 entities. The entities vary as to which fields are populated, but the end result is one search document per entity, from all arrays, in all blobs.
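To parse the arrays, the indexer sets the `jsonArray` parsing mode, along these lines (a minimal sketch; the object names are placeholders):

```json
{
  "name": "clinical-trials-indexer",
  "dataSourceName": "clinical-trials-ds",
  "targetIndexName": "clinical-trials-index",
  "parameters": {
    "configuration": { "parsingMode": "jsonArray" }
  }
}
```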
api-key: [admin key]
} ```
-### jsonLines example (caselaw sample data)
-
-The [caselaw JSON data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/caselaw) on GitHub is helpful for testing JSON new line parsing. As with other samples, you can upload this data to Blob storage and use the **Import data** wizard to quickly evaluate the impact of parsing mode on individual blobs.
-
-The data set consists of one blob containing 10 JSON entities separate by a new line, where each entity describes a single legal case. The end result is one search document per entity.
- ## Map JSON fields to search fields Field mappings are used to associate a source field with a destination field in situations where the field names and types are not identical. But field mappings can also be used to match parts of a JSON document and "lift" them into top-level fields of the search document.
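For example, a field mapping inside the indexer definition that lifts a nested value into a top-level search field might look like this sketch (the source path and field names are hypothetical):

```json
"fieldMappings": [
  {
    "sourceFieldName": "/rooms/0/baseRate",
    "targetFieldName": "firstRoomBaseRate"
  }
]
```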
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/threat-intelligence-integration.md
To connect to Threat Intelligence Platform (TIP) feeds, follow the instructions
### ThreatQuotient Threat Intelligence Platform -- See [Microsoft Sentinel Connector for ThreatQ integration](https://appsource.microsoft.com/product/web-apps/threatquotientinc1595345895602.microsoft-sentinel-connector-threatq?src=health&tab=DetailsAndSupport) for support information and instructions to connect [ThreatQuotient TIP](https://www.threatq.com/) to Azure Sentinel.
+- See [Microsoft Sentinel Connector for ThreatQ integration](https://azuremarketplace.microsoft.com/marketplace/apps/threatquotientinc1595345895602.microsoft-sentinel-connector-threatq?tab=overview) for support information and instructions to connect [ThreatQuotient TIP](https://www.threatq.com/) to Azure Sentinel.
## Incident enrichment sources
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-performance-improvements.md
Title: Best practices for improving performance using Azure Service Bus description: Describes how to use Service Bus to optimize performance when exchanging brokered messages. Previously updated : 03/09/2021- Last updated : 08/30/2021 # Best Practices for performance improvements using Service Bus Messaging
You can also utilize Azure Monitor to [automatically scale the Service Bus names
### Sharding across namespaces
-While scaling up Compute (Messaging Units) allocated to the namespace is an easier solution, it **may not** provide a linear increase in the throughput. This is because of Service Bus internals (storage, network, etc.) which may be limiting the throughput.
+While scaling up Compute (Messaging Units) allocated to the namespace is an easier solution, it **may not** provide a linear increase in the throughput. This is because of Service Bus internals (storage, network, etc.), which may be limiting the throughput.
The cleaner solution in this case is to shard your entities (queues, and topics) across different Service Bus Premium namespaces. You may also consider sharding across different namespaces in different Azure regions.
Service Bus client objects, such as `QueueClient` or `MessageSender`, are create
The following note applies to all SDKs: > [!NOTE]
-> Establishing a connection is an expensive operation that you can avoid by reusing the same factory and client objects for multiple operations. You can safely use these client objects for concurrent asynchronous operations and from multiple threads.
+> Establishing a connection is an expensive operation that you can avoid by reusing the same factory or client objects for multiple operations. You can safely use these client objects for concurrent asynchronous operations and from multiple threads.
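For example, with the `Azure.Messaging.ServiceBus` library the pattern looks like this minimal sketch (the connection string and queue name are placeholders; one client and one sender are created once and then reused):

```csharp
using Azure.Messaging.ServiceBus;

// Placeholder: supply your own namespace connection string.
string connectionString = "<service-bus-connection-string>";

// Create once and cache for the lifetime of the application.
ServiceBusClient client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("myqueue");

// The same client and sender can be reused concurrently and from multiple threads.
await sender.SendMessageAsync(new ServiceBusMessage("message 1"));
await sender.SendMessageAsync(new ServiceBusMessage("message 2"));
```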
## Concurrent operations Operations such as send, receive, delete, and so on, take some time. This time includes the time that the Service Bus service takes to process the operation and the latency of the request and the response. To increase the number of operations per time, operations must execute concurrently.
site-recovery Site Recovery Vmware Deployment Planner Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-vmware-deployment-planner-cost-estimation.md
The total DR cost is categorized based on two different states - replication and
**Replication cost**: The cost incurred at the time of replication. It covers the cost of storage, network, and the Azure Site Recovery license. **DR-Drill cost**: The cost incurred at the time of DR drills. Azure Site Recovery spins up VMs during DR drills. The DR drill cost covers the compute and storage cost of the running VMs.
-Total DR drill duration in a year = Number of DR drills x Each DR drill duration (days)
-Average DR drill cost (per month) = Total DR drill cost / 12
+
+1. Total DR drill duration in a year = Number of DR drills x Each DR drill duration (days)
+
+2. Average DR drill cost (per month) = Total DR drill cost / 12
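For example, if you run 4 DR drills in a year and each drill lasts 5 days, the total DR drill duration is 4 x 5 = 20 days; if the compute and storage for those 20 days cost $120 in total, the average DR drill cost per month is $120 / 12 = $10 (illustrative numbers only).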
### Storage cost table: This table shows the premium and standard storage costs incurred for replication and DR drills, with and without discount.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server ova** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
-[Rollup 56](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 9.43.6040.1 | 5.1.6853.0 | 9.43.6040.1| 5.1.6853.0 | 2.0.9226.0
+[Rollup 57](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 9.44.6068.1 | 5.1.6899.0 | 9.44.6068.1 | 5.1.6899.0 | 2.0.9236.0
+[Rollup 56](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 9.43.6040.1 | 5.1.6853.0 | 9.43.6040.1| 5.1.6853.0 | 2.0.9226.0
[Rollup 55](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 9.42.5941.1 | 5.1.6692.0 | 9.42.5941.1 | 5.1.6692.0 | 2.0.9208.0 [Rollup 54](https://support.microsoft.com/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 9.41.5888.1 | 5.1.6620.0 | 9.41.5888.1 | 5.1.6620.0 | 2.0.9202.0 [Rollup 53](https://support.microsoft.com/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 9.40.5850.1 | 5.1.6537.0 | 9.40.5850.1 | 5.1.6537.0 | 2.0.9202.0 [Rollup 52](https://support.microsoft.com/help/4597409/) | 9.39.5796.1 | 5.1.6458.0 | 9.39.5796.1 | 5.1.6458.0 | 2.0.9196.0
-[Rollup 51](https://support.microsoft.com/help/4590304) | 9.38.5761.1 | 5.1.6400.0 | 9.38.5761.1 | 5.1.6400.0 | 2.0.9193.0
-[Rollup 50](https://support.microsoft.com/help/4582666/) | 9.37.5724.1 | 5.1.6347.0 | 9.37.5724.1 | 5.1.6347.0 | 2.0.9192.0
-[Rollup 49](https://support.microsoft.com/help/4578241/) | 9.36.5696.1 | 5.1.6315.0 | 9.36.5696.1 | 5.1.6315.0 | 2.0.9188.0
[Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (August 2021)
+
+### Update Rollup 57
+
+[Update rollup 57](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) provides the following updates:
+
+> [!NOTE]
+> This update rollup only provides updates for the public preview of VMware-to-Azure protections. No other fixes or improvements are covered in this release.
+> To set up the preview experience, you will have to perform a fresh setup and use a new Recovery Services vault. Updating from the existing architecture to the new architecture is unsupported.
+
+This public preview covers a complete overhaul of the current architecture for protecting VMware machines.
+- [Learn](https://docs.microsoft.com/azure/site-recovery/vmware-azure-architecture-preview) about the new architecture and the changes introduced.
+- Check the prerequisites and set up the ASR replication appliance by following [these steps](https://docs.microsoft.com/azure/site-recovery/deploy-vmware-azure-replication-appliance-preview).
+- [Enable replication](https://docs.microsoft.com/azure/site-recovery/vmware-azure-set-up-replication-tutorial-preview) for your VMware machines.
+- Check out the [automatic upgrade](https://docs.microsoft.com/azure/site-recovery/upgrade-mobility-service-preview) and [switch](https://docs.microsoft.com/azure/site-recovery/switch-replication-appliance-preview) capabilities for the ASR replication appliance.
++
+### Update rollup 56
+
+[Update rollup 56](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) provides the following updates:
+
+**Update** | **Details**
+ |
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup KB article.
+
+**Azure Site Recovery Service** | Made improvements so that enabling replication and re-protect operations are faster by 46%.
+**Azure Site Recovery Portal** | Replication can now be enabled between any two Azure regions around the world. You are no longer limited to enabling replication within your continent.
++ ## Updates (July 2021) ### Update rollup 56
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-physical-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
16.04 LTS | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 4.4.0-21-generic to 4.4.0-197-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-128-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1102-azure | 16.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.4.0-21-generic to 4.4.0-194-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-123-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1098-azure| |||
-18.04 LTS | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic |
+18.04 LTS | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic |
18.04 LTS |[9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic | 18.04 LTS | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.15.0-20-generic to 4.15.0-135-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-70-generic </br> 5.4.0-37-generic to 5.4.0-59-generic</br> 5.4.0-60-generic to 5.4.0-65-generic </br> 4.15.0-1009-azure to 4.15.0-1106-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1039-azure| 18.04 LTS | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 4.15.0-20-generic to 4.15.0-129-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-63-generic </br> 5.3.0-19-generic to 5.3.0-69-generic </br> 5.4.0-37-generic to 5.4.0-59-generic</br> 4.15.0-1009-azure to 4.15.0-1103-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1035-azure|
storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/reference.md
The following list contains links to libraries for other programming languages a
## Azure CLI
-[Azure CLI reference](/cli/azure/storage)
+[Azure CLI reference](/cli/azure/azure-cli-reference-for-storage)
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-explorer-troubleshooting.md
Part 1: Install and Configure Fiddler
14. Click "Copy to File…" 15. In the export wizard choose the following options - Base-64 encoded X.509
- - For file name, Browse… to C:\Users\<your user dir>\AppData\Roaming\StorageExplorer\certs, and then you can save it as any file name
+ - For file name, Browse… to `C:\Users\<your user dir>\AppData\Roaming\StorageExplorer\certs` and then you can save it as any file name
16. Close the certificate window 17. Start Storage Explorer 18. Go to Edit > Configure Proxy
If none of these solutions work for you, you can:
- Create a support ticket - [Open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues). You can also do this by selecting the **Report issue to GitHub** button in the lower-left corner.
-![Feedback](./media/storage-explorer-troubleshooting/feedback-button.PNG)
+![Feedback](./media/storage-explorer-troubleshooting/feedback-button.PNG)
storsimple Storsimple Configure Mpio On Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-configure-mpio-on-linux.md
This load-balancing algorithm uses all the available multipaths to the active co
Login to [iface: eth1, target: iqn.1991-05.com.microsoft:storsimple8100-shx0991003g00dv-target, portal: 10.126.162.26,3260] successful. ```
- If you see only one host interface and two paths here, then you need to enable both the interfaces on host for iSCSI. You can follow the [detailed instructions in Linux documentation](https://access.redhat.com/documentation/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/iscsioffloadmain.html).
+ If you see only one host interface and two paths here, then you need to enable both the interfaces on host for iSCSI. You can follow the [detailed instructions in Linux documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/online_storage_reconfiguration_guide/ifacesetup-iscsioffload).
1. A volume is exposed to the CentOS server from the StorSimple device. For more information, see [Step 6: Create a volume](storsimple-8000-deployment-walkthrough-u2.md#step-6-create-a-volume) via the Azure portal on your StorSimple device.
synapse-analytics Workspaces Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/workspaces-encryption.md
Title: Azure Synapse Analytics encryption description: An article that explains encryption in Azure Synapse Analytics-+ Previously updated : 07/14/2021- Last updated : 07/20/2021+ ++ # Encryption for Azure Synapse Analytics workspaces
The first layer of encryption for Azure services is enabled with platform-manage
## Azure Synapse encryption
-This section will help you better understand how customer-managed key encryption is enabled and enforced in Synapse workspaces. This encryption uses existing keys or new keys generated in Azure Key Vault. A single key is used to encrypt all the data in a workspace. Synapse workspaces support RSA keys with 2048 and 3072 byte-sized keys.
+This section will help you better understand how customer-managed key encryption is enabled and enforced in Synapse workspaces. This encryption uses existing keys or new keys generated in Azure Key Vault. A single key is used to encrypt all the data in a workspace. Synapse workspaces support 2048-bit and 3072-bit RSA keys, as well as RSA-HSM keys.
> [!NOTE] > Synapse workspaces do not support the use of EC, EC-HSM, and oct-HSM keys for encryption.
The data in the following Synapse components is encrypted with the customer-mana
## Workspace encryption configuration
-Workspaces can be configured to enable double encryption with a customer-managed key at the time of workspace creation. Select the "Enable double encryption using a customer-managed key" option on the "Security" tab when creating your new workspace. You can choose to enter a key identifier URI or select from a list of key vaults in the **same region** as the workspace. The Key Vault itself needs to have **purge protection enabled**.
+Workspaces can be configured to enable double encryption with a customer-managed key at the time of workspace creation. Enable double encryption using a customer-managed key on the "Security" tab when creating your new workspace. You can choose to enter a key identifier URI or select from a list of key vaults in the **same region** as the workspace. The Key Vault itself needs to have **purge protection enabled**.
> [!IMPORTANT] > The configuration setting for double encryption cannot be changed after the workspace is created.
Workspaces can be configured to enable double encryption with a customer-managed
### Key access and workspace activation
-The Azure Synapse encryption model with customer-managed keys involves the workspace accessing the keys in Azure Key Vault to encrypt and decrypt as needed. The keys are made accessible to the workspace either through an access policy or [Azure Key Vault RBAC access](../../key-vault/general/rbac-guide.md). When granting permissions via an Azure Key Vault access policy, choose the ["Application-only"](../../key-vault/general/security-features.md#key-vault-authentication-options) option during policy creation (select the workspace's managed identity and do not add it as an authorized application).
+The Azure Synapse encryption model with customer-managed keys involves the workspace accessing the keys in Azure Key Vault to encrypt and decrypt as needed. The keys are made accessible to the workspace either through an access policy or [Azure Key Vault RBAC access](../../key-vault/general/rbac-guide.md). When granting permissions via an Azure Key Vault access policy, choose the ["Application-only"](../../key-vault/general/security-features.md#key-vault-authentication-options) option during policy creation (select the workspace's managed identity and do not add it as an authorized application).
The workspace managed identity must be granted the permissions it needs on the key vault before the workspace can be activated. This phased approach to workspace activation ensures that data in the workspace is encrypted with the customer-managed key. Note that encryption can be enabled or disabled for individual dedicated SQL pools; pools are not enabled for encryption by default.
+#### Using a user-assigned managed identity
+Workspaces can be configured to use a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to access your customer-managed key stored in Azure Key Vault. Configure a user-assigned managed identity to avoid phased activation of your Azure Synapse workspace when using double encryption with customer-managed keys. The Managed Identity Contributor built-in role is required to assign a user-assigned managed identity to an Azure Synapse workspace. A command sketch follows the note below.
+> [!NOTE]
+> A user-assigned managed identity cannot be configured to access a customer-managed key when Azure Key Vault is behind a firewall.
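As a minimal sketch (identity, vault, and resource group names are placeholders), a user-assigned managed identity can be created and granted the key permissions listed in the Permissions section that follows:

```azurecli
# Create the user-assigned managed identity (names are assumptions).
az identity create --name synapse-cmk-identity --resource-group myrg

# Capture its object (principal) ID.
principalId=$(az identity show --name synapse-cmk-identity \
  --resource-group myrg --query principalId -o tsv)

# Grant the identity the key permissions the workspace needs on the vault.
az keyvault set-policy --name myvault \
  --object-id "$principalId" \
  --key-permissions get wrapKey unwrapKey
```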
#### Permissions
-To encrypt or decrypt data at rest, the workspace managed identity must have the following permissions:
+To encrypt or decrypt data at rest, the managed identity must have the following permissions:
* WrapKey (to insert a key into Key Vault when creating a new key).
* UnwrapKey (to get the key for decryption).
* Get (to read the public part of a key).

#### Workspace activation
-After your workspace (with double encryption enabled) is created, it will remain in a "Pending" state until activation succeeds. The workspace must be activated before you can fully use all functionality. For example, you can only create a new dedicated SQL pool once activation succeeds. Grant the workspace managed identity access to the key vault and click on the activation link in the workspace Azure portal banner. Once the activation completes successfully, your workspace is ready to use with the assurance that all data in it is protected with your customer-managed key. As previously noted, the key vault must have purge protection enabled for activation to succeed.
+If you do not configure a user-assigned managed identity to access customer-managed keys during workspace creation, your workspace remains in a "Pending" state until activation succeeds. The workspace must be activated before you can use all of its functionality. For example, you can only create a new dedicated SQL pool after activation succeeds. Grant the workspace managed identity access to the key vault, and then select the activation link in the workspace's Azure portal banner. After activation completes successfully, your workspace is ready to use, with the assurance that all data in it is protected by your customer-managed key. As previously noted, the key vault must have purge protection enabled for activation to succeed.
:::image type="content" source="./media/workspaces-encryption/workspace-activation.png" alt-text="This diagram shows the banner with the activation link for the workspace." lightbox="./media/workspaces-encryption/workspace-activation.png":::
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
To parameterize your notebook, select the ellipses (...) to access the **more co
-Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine will add a new cell beneath the parameters cell with input parameters in order to overwrite the default values. When a parameters cell isn't designated, the injected cell will be inserted at the top of the notebook.
+Azure Data Factory looks for the parameters cell and uses the values in it as defaults for the parameters passed in at execution time. The execution engine adds a new cell beneath the parameters cell with the input parameters to overwrite the default values.
### Assign parameters values from a pipeline
synapse-analytics Synapse Notebook Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/synapse-notebook-activity.md
To parameterize your notebook, select the ellipses (...) to access the **more co
-Azure Data Factory looks for the parameters cell and uses the values as defaults for the parameters passed in at execution time