Updates from: 09/29/2022 01:10:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-ios-app.md
Update the following class members:
## Step 6: Run and test the mobile app
-1. Build and run the project with a [simulator of a connected iOS device](https://developer.apple.com/documentation/xcode/running-your-app-in-the-simulator-or-on-a-device).
+1. Build and run the project with a [simulator of a connected iOS device](https://developer.apple.com/documentation/xcode/running-your-app-in-simulator-or-on-a-device).
1. Select **Sign In**, and then sign up or sign in with your Azure AD B2C local or social account.
active-directory-b2c Enable Authentication Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-ios-app.md
To learn how to configure your iOS Swift app, see [Configure authentication in a
## Step 5: Run and test the mobile app
-1. Build and run the project with a [simulator of a connected iOS device](https://developer.apple.com/documentation/xcode/running-your-app-in-the-simulator-or-on-a-device).
+1. Build and run the project with a [simulator of a connected iOS device](https://developer.apple.com/documentation/xcode/running-your-app-in-simulator-or-on-a-device).
1. Select **Sign In**, and then sign up or sign in with your Azure AD B2C local or social account. 1. After you've authenticated successfully, you'll see your display name in the navigation bar.
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Combined registration supports the authentication methods and actions in the fol
| Hardware token | No | No | Yes | | Phone | Yes | Yes | Yes | | Alternate phone | Yes | Yes | Yes |
-| Office phone | Yes | Yes | Yes |
+| Office phone* | Yes | Yes | Yes |
| Email | Yes | Yes | Yes | | Security questions | Yes | No | Yes |
-| App passwords | Yes | No | Yes |
-| FIDO2 security keys<br />*Managed mode only from the [Security info](https://mysignins.microsoft.com/security-info) page*| Yes | Yes | Yes |
+| App passwords* | Yes | No | Yes |
+| FIDO2 security keys*| Yes | Yes | Yes |
> [!NOTE]
-> App passwords are available only to users who have been enforced for Azure AD Multi-Factor Authentication. App passwords are not available to users who are enabled for Azure AD Multi-Factor Authentication by a Conditional Access policy.
+> **Office phone** can only be registered in *Interrupt mode* if the user's *Business phone* property has been set. Office phone can be added by users in *Managed mode* from the [Security info](https://mysignins.microsoft.com/security-info) page without this requirement. <br />
+> **App passwords** are available only to users who have been enforced for Azure AD Multi-Factor Authentication. App passwords are not available to users who are enabled for Azure AD Multi-Factor Authentication by a Conditional Access policy. <br />
+> **FIDO2 security keys** can only be added in *Managed mode* from the [Security info](https://mysignins.microsoft.com/security-info) page.
Users can set one of the following options as the default multifactor authentication method.
A user who hasn't yet set up all required security info goes to [https://myaccou
### Set up other methods after partial registration
-If a user has partially satisfied MFA or SSPR registration due to existing authentication method registrations performed by the user or admin, users will only be asked to register additional information allowed by the Authentication methods policy. If more than one other authentication method is available for the user to choose and register, an option on the registration experience titled **I want to set up another method** will be shown and allow the user to set up their desired authentication method.
+If a user has partially satisfied MFA or SSPR registration due to existing authentication method registrations performed by the user or admin, users will only be asked to register additional information allowed by the Authentication methods policy settings when registration is required. If more than one other authentication method is available for the user to choose and register, an option on the registration experience titled **I want to set up another method** will be shown and allow the user to set up their desired authentication method.
:::image type="content" border="true" source="./media/concept-registration-mfa-sspr-combined/other-method.png" alt-text="Screenshot of how to set up another method." :::
active-directory Tutorial Enable Cloud Sync Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md
Previously updated : 09/08/2022 Last updated : 09/27/2022
Azure Active Directory Connect cloud sync can synchronize Azure AD password chan
- An account with: - [Global Administrator](../roles/permissions-reference.md#global-administrator) role - Azure AD configured for self-service password reset. If needed, complete this tutorial to enable Azure AD SSPR. -- An on-premises AD DS environment configured with [Azure AD Connect cloud sync version 1.1.972.0 or later](../app-provisioning/provisioning-agent-release-version-history.md). Learn how to [identify the agent's current version](../cloud-sync/how-to-automatic-upgrade.md). If needed, configure Azure AD Connect cloud sync using [this tutorial](tutorial-enable-sspr.md).
+- An on-premises AD DS environment configured with [Azure AD Connect cloud sync version 1.1.977.0 or later](../app-provisioning/provisioning-agent-release-version-history.md). Learn how to [identify the agent's current version](../cloud-sync/how-to-automatic-upgrade.md). If needed, configure Azure AD Connect cloud sync using [this tutorial](tutorial-enable-sspr.md).
## Deployment steps
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
Previously updated : 09/09/2021 Last updated : 09/26/2022 # Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application (SPA)
-In this tutorial, you build a JavaScript single-page application (SPA) that signs in users and calls Microsoft Graph by using the implicit flow. The SPA you build uses the Microsoft Authentication Library (MSAL) for JavaScript v1.0.
+In this tutorial, you'll build a JavaScript single-page application (SPA) that signs in users and calls Microsoft Graph by using the OAuth 2.0 implicit flow. This SPA uses MSAL.js v1.x, which uses the implicit grant flow for SPAs. For all new applications, use [MSAL.js v2.x and the authorization code flow with PKCE and CORS](tutorial-v2-javascript-auth-code.md), which provides more security than the implicit flow.
In this tutorial:
In this tutorial:
> * Add code to support user sign-in and sign-out > * Add code to call Microsoft Graph API > * Test the app-
->[!TIP]
-> This tutorial uses MSAL.js v1.x which is limited to using the implicit grant flow for single-page applications. We recommend all new applications instead use [MSAL.js 2.x and the authorization code flow with PKCE and CORS](tutorial-v2-javascript-auth-code.md) support.
+> * Gain understanding of how the process works behind the scenes
+
+At the end of this tutorial, you'll have created the folder structure below (listed in order of creation), along with the *.js* and *.html* files, by copying the code blocks in the upcoming sections.
+
+```txt
+sampleApp/
+├── JavaScriptSPA/
+│   ├── authConfig.js
+│   ├── authPopup.js
+│   ├── graph.js
+│   ├── graphConfig.js
+│   ├── index.html
+│   └── ui.js
+├── package.json
+├── package-lock.json
+├── node_modules/
+│   └── ...
+└── server.js
+
+```
## Prerequisites
In this tutorial:
![Shows how the sample app generated by this tutorial works](media/active-directory-develop-guidedsetup-javascriptspa-introduction/javascriptspa-intro.svg)
-The sample application created by this guide enables a JavaScript SPA to query the Microsoft Graph API or a web API that accepts tokens from the Microsoft identity platform. In this scenario, after a user signs in, an access token is requested and added to HTTP requests through the authorization header. This token will be used to acquire the user's profile and mails via **MS Graph API**.
+The sample application created by this guide enables a JavaScript SPA to query the Microsoft Graph API. This can also work for a web API that is set up to accept tokens from the Microsoft identity platform. After the user signs in, an access token is requested and added to the HTTP requests through the authorization header. This token will be used to acquire the user's profile and mails via **MS Graph API**.
Token acquisition and renewal are handled by the [Microsoft Authentication Library (MSAL) for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js).
-## Set up your web server or project
+## Set up the web server or project
> Prefer to download this sample's project instead? [Download the project files](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/archive/quickstart.zip). >
-> To configure the code sample before you execute it, skip to the [configuration step](#register-your-application).
+> To configure the code sample before you execute it, skip to the [registration step](#register-the-application).
-## Create your project
+## Create the project
-Make sure you have [Node.js](https://nodejs.org/en/download/) installed, and then create a folder to host your application. There, we will implement a simple [Express](https://expressjs.com/) web server to serve your `index.html` file.
+Make sure [*Node.js*](https://nodejs.org/en/download/) is installed, and then create a folder to host the application. Name the folder *sampleApp*. In this folder, an [*Express*](https://expressjs.com/) web server is created to serve the *index.html* file.
-1. Using a terminal (such as Visual Studio Code integrated terminal), locate your project folder, then type:
+1. Using a terminal (such as Visual Studio Code integrated terminal), locate the project folder, move into it, then type:
- ```console
- npm init
- ```
+```console
+npm init
+```
-2. Next, install the required dependencies:
+2. A series of prompts will appear to create the application. Notice that the folder name, *sampleApp*, is now all lowercase. The values in parentheses `()` are generated by default. Feel free to experiment; however, for the purposes of this tutorial, you don't need to enter anything and can press **Enter** to continue to the next prompt.
+
+ ```console
+ package name: (sampleapp)
+ version: (1.0.0)
+ description:
+ entry point: (index.js)
+ test command:
+ git repository:
+ keywords:
+
+ license: (ISC)
+ ```
+
+3. The final consent prompt will contain the following output, assuming no values were entered in the previous step. Press **Enter**, and the JSON is written to a file called *package.json*.
+
+ ```console
+ {
+ "name": "sampleapp",
+ "version": "1.0.0",
+ "description": "",
+ "main": "index.js",
+ "scripts": {
+ "test": "echo \"Error: no test specified\" && exit 1"
+ },
+ "author": "",
+ "license": "ISC"
+ }
+
+
+ Is this OK? (yes)
+ ```
+
+4. Next, install the required dependencies. Express.js is a Node.js module designed to simplify the creation of web servers and APIs. Morgan.js is used to log HTTP requests and errors. Upon installation, the *package-lock.json* file and *node_modules* folder are created.
```console npm install express --save npm install morgan --save ```
-1. Now, create a .js file named `server.js`, and then add the following code:
+5. Now, create a *.js* file named *server.js* in your current folder, and add the following code:
```JavaScript const express = require('express');
Make sure you have [Node.js](https://nodejs.org/en/download/) installed, and the
console.log('Listening on port ' + port + '...'); ```
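
Only fragments of *server.js* appear in this change summary. As a rough sketch (assuming port 3000 and the *JavaScriptSPA* folder described in this tutorial), the complete file could look something like this:

```javascript
// server.js - illustrative sketch of the Express static server (assumed port 3000)
const express = require('express');
const morgan = require('morgan');
const path = require('path');

const app = express();
const port = process.env.PORT || 3000;

// Log each incoming HTTP request
app.use(morgan('dev'));

// Serve the SPA's static files (index.html and the *.js files)
app.use(express.static(path.join(__dirname, 'JavaScriptSPA')));

app.listen(port, () => console.log('Listening on port ' + port + '...'));
```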
-You now have a simple server to serve your SPA. The intended folder structure at the end of this tutorial is as follows:
+You now have a server to serve the SPA. At this point, the folder structure should look like this:
+
+```txt
+sampleApp/
+├── package.json
+├── package-lock.json
+├── node_modules/
+│   └── ...
+└── server.js
+```
-![a text depiction of the intended SPA folder structure](./media/tutorial-v2-javascript-spa/single-page-application-folder-structure.png)
+In the next steps you'll create a new folder for the JavaScript SPA, and set up the user interface (UI).
+
+> [!TIP]
+> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization, and is primarily associated with a domain, like Microsoft.com. If you wish to learn how applications can work with multiple tenants, refer to the [application model](https://docs.microsoft.com/azure/active-directory/develop/application-model).
## Create the SPA UI
-1. Create an `index.html` file for your JavaScript SPA. This file implements a UI built with **Bootstrap 4 Framework** and imports script files for configuration, authentication and API call.
+1. Create a new folder, *JavaScriptSPA*, and then move into that folder.
+
+1. From there, create an *index.html* file for the SPA. This file implements a UI built with the [*Bootstrap 4 Framework*](https://www.javatpoint.com/bootstrap-4-layouts) and imports script files for configuration, authentication, and the API call.
- In the `index.html` file, add the following code:
+ In the *index.html* file, add the following code:
```html <!DOCTYPE html>
You now have a simple server to serve your SPA. The intended folder structure at
</html> ```
- > [!TIP]
- > You can replace the version of MSAL.js in the preceding script with the latest released version under [MSAL.js releases](https://github.com/AzureAD/microsoft-authentication-library-for-js/releases).
-
-2. Now, create a .js file named `ui.js`, which will access and update DOM elements, and add the following code:
+2. Now, create a *.js* file named *ui.js*, which accesses and updates the Document Object Model (DOM) elements, and add the following code:
```JavaScript // Select DOM elements to work with
You now have a simple server to serve your SPA. The intended folder structure at
} ```
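
The *ui.js* listing above is abbreviated in this summary. A minimal sketch of what it could contain is shown below; the element IDs (`WelcomeMessage`, `SignIn`) and the `signOut` handler are assumptions for illustration:

```javascript
// ui.js - sketch of the DOM helpers (element IDs and signOut handler are assumed)
const welcomeDiv = document.getElementById("WelcomeMessage");
const signInButton = document.getElementById("SignIn");

function showWelcomeMessage(account) {
    // Display the signed-in user's name and repurpose the button for sign-out
    welcomeDiv.innerHTML = "Welcome " + account.name;
    signInButton.setAttribute("onclick", "signOut();");
    signInButton.innerHTML = "Sign Out";
}
```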
-## Register your application
+## Register the application
-Before proceeding further with authentication, register your application on **Azure Active Directory**.
+Before proceeding further with authentication, register the application on **Azure Active Directory**.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
+1. Sign in to the [*Azure portal*](https://portal.azure.com/).
+1. Navigate to **Azure Active Directory**.
+1. Go to the left panel, and under **Manage**, select **App registrations**, then in the top menu bar, select **New registration**.
+1. Enter a **Name** for the application, for example **sampleApp**. The name can be changed later if necessary.
1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
-1. In the **Redirect URI** section, select the **Web** platform from the drop-down list, and then set the value to the application URL that's based on your web server.
+1. In the **Redirect URI** section, select the **Web** platform from the drop-down list. To the right, enter the value of the local host to be used. Enter either of the following options:
+ 1. `http://localhost:3000/`
+ 1. If you wish to use a custom TCP port, use `http://localhost:<port>/` (where `<port>` is the custom TCP port number).
1. Select **Register**.
-1. On the app **Overview** page, note the **Application (client) ID** value for later use.
-1. Under **Manage**, select **Authentication**.
+1. This opens the **Overview** page of the application. Note the **Application (client) ID** and **Directory (tenant) ID**. Both of them are needed when the *authConfig.js* file is created in the following steps.
+1. In the left panel, under **Manage**, select **Authentication**.
1. In the **Implicit grant and hybrid flows** section, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app must sign in users and call an API.
-1. Select **Save**.
+1. Select **Save**. You can navigate back to the **Overview** panel by selecting it in the left panel.
-> ### Set a redirect URL for Node.js
->
-> For Node.js, you can set the web server port in the *server.js* file. This tutorial uses port 3000, but you can use any other available port.
->
-> To set up a redirect URL in the application registration information, switch back to the **Application Registration** pane, and do either of the following:
->
-> - Set *`http://localhost:3000/`* as the **Redirect URL**.
-> - If you're using a custom TCP port, use *`http://localhost:<port>/`* (where *\<port>* is the custom TCP port number).
-> 1. Copy the **URL** value.
-> 1. Switch back to the **Application Registration** pane, and paste the copied value as the **Redirect URL**.
->
+The redirect URI can be changed at any time by going to the **Overview** page and selecting **Add a Redirect URI**.
-### Configure your JavaScript SPA
+## Configure the JavaScript SPA
-Create a new .js file named `authConfig.js`, which will contain your configuration parameters for authentication, and add the following code:
+1. In the *JavaScriptSPA* folder, create a new file, *authConfig.js*, and copy the following code. This code contains the configuration parameters for authentication (Client ID, Tenant ID, Redirect URI).
```javascript const msalConfig = { auth: { clientId: "Enter_the_Application_Id_Here", authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
- redirectUri: "Enter_the_Redirect_Uri_Here",
+ redirectUri: "Enter_the_Redirect_URI_Here",
}, cache: { cacheLocation: "sessionStorage", // This configures where your cache will be stored
Create a new .js file named `authConfig.js`, which will contain your configurati
}; ```
-Modify the values in the `msalConfig` section as described here:
-- *\<Enter_the_Application_Id_Here>* is the **Application (client) ID** for the application you registered.-- *\<Enter_the_Cloud_Instance_Id_Here>* is the instance of the Azure cloud. For the main or global Azure cloud, enter *https://login.microsoftonline.com*. For **national** clouds (for example, China), see [National clouds](./authentication-national-cloud.md).-- Set *\<Enter_the_Tenant_info_here>* to one of the following options:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name** (for example, *contoso.microsoft.com*).
- - If your application supports *accounts in any organizational directory*, replace this value with **organizations**.
- - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with **common**. To restrict support to *personal Microsoft accounts only*, replace this value with **consumers**.
+2. Modify the values in the `msalConfig` section as described below. Refer to the **Overview** page of the application for these values:
+ - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered. You can copy this directly from **Azure**.
+
+ - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), refer to [*National clouds*](./authentication-national-cloud.md).
+ - Set `Enter_the_Tenant_info_here` to one of the following options:
+ - If your application supports *accounts in this organizational directory*, replace this value with the **Directory (tenant) ID** or **Tenant name** (for example, *contoso.microsoft.com*).
+ - `Enter_the_Redirect_URI_Here` is the default URL that you set in the previous section, `http://localhost:3000/`.
+
+> [!TIP]
+> There are other options for `Enter_the_Tenant_info_here` depending on what you want your application to support.
+> - If your application supports *accounts in any organizational directory*, replace this value with **organizations**.
+> - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with **common**. To restrict support to *personal Microsoft accounts only*, replace this value with **consumers**.
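
For illustration only, here's how a completed *authConfig.js* might look after the placeholders are replaced. The client ID below is a dummy GUID and `common` is used as the tenant value; substitute the values from your own app registration:

```javascript
// authConfig.js - sketch with placeholder values (dummy client ID, "common" tenant)
const msalConfig = {
    auth: {
        clientId: "11111111-2222-3333-4444-555555555555",      // Application (client) ID from the Overview page
        authority: "https://login.microsoftonline.com/common",  // cloud instance + tenant info
        redirectUri: "http://localhost:3000/",                   // must match the registered redirect URI
    },
    cache: {
        cacheLocation: "sessionStorage", // where the token cache is stored
        storeAuthStateInCookie: false,   // set to true to work around known IE/Edge storage issues
    },
};
```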
## Use the Microsoft Authentication Library (MSAL) to sign in the user
-Create a new .js file named `authPopup.js`, which will contain your authentication and token acquisition logic, and add the following code:
+In the *JavaScriptSPA* folder, create a new *.js* file named *authPopup.js*, which contains the authentication and token acquisition logic, and add the following code:
```JavaScript const myMSALObj = new Msal.UserAgentApplication(msalConfig);
Create a new .js file named `authPopup.js`, which will contain your authenticati
} ```
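
The *authPopup.js* listing above is truncated in this summary. A condensed sketch of the sign-in portion is shown below; the `loginRequest` scopes and the `showWelcomeMessage` helper are assumptions for illustration:

```javascript
// authPopup.js - condensed sketch of the sign-in logic (scopes are assumed)
const myMSALObj = new Msal.UserAgentApplication(msalConfig);
const loginRequest = { scopes: ["openid", "profile", "User.Read"] };

function signIn() {
    myMSALObj.loginPopup(loginRequest)
        .then((loginResponse) => {
            console.log("id_token acquired at: " + new Date().toString());
            if (myMSALObj.getAccount()) {
                showWelcomeMessage(myMSALObj.getAccount()); // UI helper from ui.js
            }
        })
        .catch((error) => console.error(error));
}
```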
-### More information
+## More information
-After a user selects the **Sign In** button for the first time, the `signIn` method calls `loginPopup` to sign in the user. This method opens a pop-up window with the *Microsoft identity platform endpoint* to prompt and validate the user's credentials. After a successful sign-in, the user is redirected back to the original *index.html* page. A token is received, processed by `msal.js`, and the information contained in the token is cached. This token is known as the *ID token* and contains basic information about the user, such as the user display name. If you plan to use any data provided by this token for any purposes, make sure this token is validated by your backend server to guarantee that the token was issued to a valid user for your application.
+The first time a user selects the **Sign In** button, the `signIn` function you added to the *authPopup.js* file calls MSAL's `loginPopup` function to start the sign-in process. This method opens a pop-up window with the *Microsoft identity platform endpoint* to prompt and validate the user's credentials. After a successful sign-in, the user is redirected back to the original *index.html* page. A token is received, processed by *msal.js*, and the information contained in the token is cached. This token is known as the *ID token* and contains basic information about the user, such as the user display name. If you plan to use any data provided by this token for any purposes, make sure this token is validated by your backend server to guarantee that the token was issued to a valid user for your application.
-The SPA generated by this guide calls `acquireTokenSilent` and/or `acquireTokenPopup` to acquire an *access token* used to query the Microsoft Graph API for user profile info. If you need a sample that validates the ID token, take a look at [this](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-dotnet-webapi-v2 "GitHub active-directory-javascript-singlepageapp-dotnet-webapi-v2 sample") sample application in GitHub. The sample uses an ASP.NET web API for token validation.
+The SPA generated by this tutorial calls `acquireTokenSilent` and/or `acquireTokenPopup` to acquire an *access token* used to query the Microsoft Graph API for the user's profile info. If you need a sample that validates the ID token, refer to the following [sample application](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/3-Authorization-II/1-call-api/README.md "GitHub active-directory-javascript-singlepageapp-dotnet-webapi-v2 sample") in GitHub, which uses an ASP.NET web API for token validation.
-#### Get a user token interactively
+### Get a user token interactively
-After the initial sign-in, you do not want to ask users to reauthenticate every time they need to request a token to access a resource. So *acquireTokenSilent* should be used most of the time to acquire tokens. There are situations, however, where you force users to interact with Microsoft identity platform. Examples include:
+After the initial sign-in, users shouldn't need to reauthenticate every time they need to request a token to access a resource. Therefore, `acquireTokenSilent` should be used most of the time to acquire tokens. There are situations, however, where users must interact with the Microsoft identity platform. Examples include when:
- Users need to reenter their credentials because the password has expired.-- Your application is requesting access to a resource, and you need the user's consent.
+- An application is requesting access to a resource, and the user's consent is needed.
- Two-factor authentication is required.
-Calling *acquireTokenPopup* opens a pop-up window (or *acquireTokenRedirect* redirects users to the Microsoft identity platform). In that window, users need to interact by confirming their credentials, giving consent to the required resource, or completing the two-factor authentication.
+Calling `acquireTokenPopup` opens a pop-up window (or `acquireTokenRedirect` redirects users to the Microsoft identity platform). In that window, users need to interact by confirming their credentials, giving consent to the required resource, or completing the two-factor authentication.
-#### Get a user token silently
+### Get a user token silently
-The `acquireTokenSilent` method handles token acquisition and renewal without any user interaction. After `loginPopup` (or `loginRedirect`) is executed for the first time, `acquireTokenSilent` is the method commonly used to obtain tokens used to access protected resources for subsequent calls. (Calls to request or renew tokens are made silently.)
-`acquireTokenSilent` may fail in some cases. For example, the user's password may have expired. Your application can handle this exception in two ways:
+The `acquireTokenSilent` method handles token acquisition and renewal without any user interaction. After `loginPopup` (or `loginRedirect`) is executed for the first time, `acquireTokenSilent` is the method commonly used to obtain tokens used to access protected resources for subsequent calls. (Calls to request or renew tokens are made silently.) It's worth noting that `acquireTokenSilent` may fail in some cases, such as when a user's password expires. The application can then handle this exception in two ways:
-1. Make a call to `acquireTokenPopup` immediately, which triggers a user sign-in prompt. This pattern is commonly used in online applications where there is no unauthenticated content in the application available to the user. The sample generated by this guided setup uses this pattern.
+1. Making a call to `acquireTokenPopup` immediately, which triggers a user sign-in prompt. This pattern is commonly used in online applications where there's no unauthenticated content in the application available to the user. The sample generated by this guided setup uses this pattern.
-1. Applications can also make a visual indication to the user that an interactive sign-in is required, so the user can select the right time to sign in, or the application can retry `acquireTokenSilent` at a later time. This is commonly used when the user can use other functionality of the application without being disrupted. For example, there might be unauthenticated content available in the application. In this situation, the user can decide when they want to sign in to access the protected resource, or to refresh the outdated information.
+1. Making a visual indication to the user that an interactive sign-in is required. The user can then select the right time to sign in, or the application can retry `acquireTokenSilent` at a later time. This is commonly used when the user can use other functionality of the application without being disrupted. For example, there might be unauthenticated content available in the application. In this situation, the user can decide when they want to sign in to access the protected resource, or to refresh the outdated information.
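
As a minimal sketch of the first pattern (names like `tokenRequest` and `getTokenPopup` are assumptions for illustration), the silent call can simply fall back to an interactive popup when it fails:

```javascript
// Sketch of pattern 1: try silent token acquisition, fall back to a popup on failure
const tokenRequest = { scopes: ["User.Read"] }; // assumed scope for the Graph /me call

function getTokenPopup(request) {
    return myMSALObj.acquireTokenSilent(request)
        .catch((error) => {
            console.warn("Silent token acquisition failed; falling back to acquireTokenPopup", error);
            return myMSALObj.acquireTokenPopup(request);
        });
}
```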
> [!NOTE]
-> This quickstart uses the `loginPopup` and `acquireTokenPopup` methods by default. If you are using Internet Explorer as your browser, it is recommended to use `loginRedirect` and `acquireTokenRedirect` methods, due to a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues-on-IE-and-Edge-Browser#issues) related to the way Internet Explorer handles pop-up windows. If you would like to see how to achieve the same result using `Redirect methods`, please [see](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/blob/quickstart/JavaScriptSPA/authRedirect.js).
+> This tutorial uses the `loginPopup` and `acquireTokenPopup` methods by default. If you are using Internet Explorer as your browser, it is recommended to use `loginRedirect` and `acquireTokenRedirect` methods, due to a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues-on-IE-and-Edge-Browser#issues) related to the way Internet Explorer handles pop-up windows. If you would like to see how to achieve the same result using *Redirect methods*, please see the [sample code](https://github.com/Azure-Samples/active-directory-javascript-graphapi-v2/blob/quickstart/JavaScriptSPA/authRedirect.js).
-## Call the Microsoft Graph API by using the token you just acquired
+## Call the Microsoft Graph API using the acquired token
-1. First, create a .js file named `graphConfig.js`, which will store your REST endpoints. Add the following code:
+1. In the *JavaScriptSPA* folder, create a *.js* file named *graphConfig.js*, which stores the Representational State Transfer ([REST](https://docs.microsoft.com/rest/api/azure/)) endpoints. Add the following code:
```JavaScript const graphConfig = {
The `acquireTokenSilent` method handles token acquisition and renewal without an
}; ```
- Where:
- - *\<Enter_the_Graph_Endpoint_Here>* is the instance of MS Graph API. For the global MS Graph API endpoint, simply replace this string with `https://graph.microsoft.com`. For national cloud deployments, please refer to [Graph API Documentation](/graph/deployments).
+ where:
+ - `Enter_the_Graph_Endpoint_Here` is the instance of Microsoft Graph API. For the global Microsoft Graph API endpoint, this can be replaced with `https://graph.microsoft.com`. For national cloud deployments, refer to [Graph API Documentation](/graph/deployments).
-1. Next, create a .js file named `graph.js`, which will make a REST call to Microsoft Graph API, and add the following code:
+1. Next, create a *.js* file named *graph.js*, which will make a REST call to the Microsoft Graph API. REST is a simple, flexible way to access a web service over HTTP. Add the following code:
```javascript function callMSGraph(endpoint, token, callback) {
The `acquireTokenSilent` method handles token acquisition and renewal without an
} ```
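
The *graph.js* fragment above is abbreviated. As a sketch, a `callMSGraph` helper along these lines attaches the access token as a bearer token on the request; the callback signature and the usage shown in the comment are assumptions:

```javascript
// graph.js - sketch of a helper that calls Microsoft Graph with a bearer token
function callMSGraph(endpoint, token, callback) {
    const headers = new Headers();
    headers.append("Authorization", "Bearer " + token); // attach the access token

    fetch(endpoint, { method: "GET", headers: headers })
        .then((response) => response.json())
        .then((data) => callback(data, endpoint))
        .catch((error) => console.error(error));
}

// Hypothetical usage: fetch the signed-in user's profile and hand it to a UI callback
// getTokenPopup(tokenRequest).then((response) => {
//     callMSGraph("https://graph.microsoft.com/v1.0/me", response.accessToken, updateUI);
// });
```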
-### More information about making a REST call against a protected API
+### More information about REST calls against a protected API
+
+In the sample application created by this guide, the `callMSGraph()` method is used to make an HTTP `GET` request against a protected resource that requires a token. The request then returns the content to the caller. This method adds the acquired token in the *HTTP Authorization header*. For the sample application created by this guide, the resource is the Microsoft Graph API `me` endpoint, which displays the user's profile information.
-In the sample application created by this guide, the `callMSGraph()` method is used to make an HTTP `GET` request against a protected resource that requires a token. The request then returns the content to the caller. This method adds the acquired token in the *HTTP Authorization header*. For the sample application created by this guide, the resource is the Microsoft Graph API *me* endpoint, which displays the user's profile information.
+## Test the code
-## Test your code
+Now that the code is set up, it needs to be tested.
-1. Configure the server to listen to a TCP port that's based on the location of your *index.html* file. For Node.js, start the web server to listen to the port by running the following commands at a command-line prompt from the application folder:
+1. The server needs to be configured to listen to a TCP port that's based on the location of the *index.html* file. For Node.js, the web server can be started to listen to the port specified earlier by running the following commands at a command-line prompt from the folder that contains *package.json* and *server.js* (the *sampleApp* folder):
```bash npm install npm start ```
-1. In your browser, enter **http://localhost:3000** or **http://localhost:{port}**, where *port* is the port that your web server is listening to. You should see the contents of your *index.html* file and the **Sign In** button.
+1. In the browser, enter `http://localhost:3000` (or `http://localhost:<port>` if a custom port was chosen). You should see the contents of the *index.html* file and a **Sign In** button on the top right of the screen.
-> [!Important]
-> Enable popups and redirects for your site in your browser settings.
-After the browser loads your *index.html* file, select **Sign In**. You're prompted to sign in with the Microsoft identity platform:
+> [!IMPORTANT]
+> Be sure to enable popups and redirects for your site in your browser settings.
-![The JavaScript SPA account sign-in window](media/active-directory-develop-guidedsetup-javascriptspa-test/javascriptspascreenshot1.png)
+After the browser loads your *index.html* file, select **Sign In**. You'll now be prompted to sign in with the Microsoft identity platform:
+ ### Provide consent for application access
-The first time that you sign in to your application, you're prompted to grant it access to your profile and sign you in:
+The first time that you sign in to your application, you're prompted to grant it access to your profile and sign you in. Select **Accept** to continue.
-![The "Permissions requested" window](media/active-directory-develop-guidedsetup-javascriptspa-test/javascriptspaconsent.png)
### View application results
-After you sign in, your user profile information is returned in the Microsoft Graph API response that's displayed:
+After you sign in, you can select **Read More** under your displayed name, and your user profile information is returned in the Microsoft Graph API response that's displayed:
-![Results from the Microsoft Graph API call](media/active-directory-develop-guidedsetup-javascriptspa-test/javascriptsparesults.png)
### More information about scopes and delegated permissions
-The Microsoft Graph API requires the *user.read* scope to read a user's profile. By default, this scope is automatically added in every application that's registered on the registration portal. Other APIs for Microsoft Graph, as well as custom APIs for your back-end server, might require additional scopes. For example, the Microsoft Graph API requires the *Mail.Read* scope in order to list the userΓÇÖs mails.
+The Microsoft Graph API requires the `User.Read` scope to read a user's profile. By default, this scope is automatically added in every application that's registered on the registration portal. Other APIs for Microsoft Graph, and custom APIs for your back-end server, might require more scopes. For example, the Microsoft Graph API requires the `Mail.Read` scope in order to list the user's mails.
> [!NOTE] > The user might be prompted for additional consents as you increase the number of scopes. [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] + ## Next steps
-Delve deeper into single-page application (SPA) development on the Microsoft identity platform in our the multi-part scenario series.
+Delve deeper into single-page application (SPA) development on the Microsoft identity platform in our multi-part scenario series.
> [!div class="nextstepaction"] > [Scenario: Single-page application](scenario-spa-overview.md)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 09/22/2022 Last updated : 09/28/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on September 22nd, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on September 28th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 Apps for enterprise (device) | OFFICE_PROPLUS_DEVICE1 | ea4c5ec8-50e3-4193-89b9-50da5bd4cdc7 | OFFICE_PROPLUS_DEVICE (3c994f28-87d5-4273-b07a-eb6190852599) | Microsoft 365 Apps for Enterprise (Device) (3c994f28-87d5-4273-b07a-eb6190852599) | | Microsoft 365 Apps for Faculty | OFFICESUBSCRIPTION_FACULTY | 12b8c807-2e20-48fc-b453-542b6ee9d171 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OneDrive for Business (Plan 1) (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91) | | Microsoft 365 Apps for Students | OFFICESUBSCRIPTION_STUDENT | c32f9321-a627-406d-a114-1f9c81aaafac | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OneDrive for Business (Plan 1) (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122) |
-| Microsoft 365 Audio Conferencing for GCC | MCOMEETADV_GOC | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
+| Microsoft 365 Audio Conferencing for GCC | MCOMEETADV_GOV | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
| Microsoft 365 Audio Conferencing Pay-Per-Minute - EA | MCOMEETACPEA | df9561a4-4969-4e6a-8e73-c601b68ec077 | MCOMEETACPEA (bb038288-76ab-49d6-afc1-eaa6c222c65a) | Microsoft 365 Audio Conferencing Pay-Per-Minute (bb038288-76ab-49d6-afc1-eaa6c222c65a) | | Microsoft 365 Business Basic | O365_BUSINESS_ESSENTIALS | 3b555118-da6a-4418-894f-7df1e2096870 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 Business Basic | SMB_BUSINESS_ESSENTIALS | dab7782a-93b1-4074-8bb1-0e61318bea0b | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER MIDSIZE 
(41bf139a-4e60-409f-9346-a1361efc6dfb) |
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
Service without current support might have compatibility issues with the new gue
- Forms - Project - Yammer
+- Planner in SharePoint
## Frequently asked questions (FAQ)
active-directory Self Service Sign Up Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-overview.md
Previously updated : 03/02/2021 Last updated : 09/28/2022
# Self-service sign-up
-When sharing an application with external users, you might not always know in advance who will need access to the application. As an alternative to sending invitations directly to individuals, you can allow external users to sign up for specific applications themselves by enabling self-service sign-up. You can create a personalized sign-up experience by customizing the self-service sign-up user flow. For example, you can provide options to sign up with Azure AD or social identity providers and collect information about the user during the sign-up process.
+When sharing an application with external users, you might not always know in advance who will need access to the application. As an alternative to sending invitations directly to individuals, you can allow external users to sign up for specific applications themselves by enabling a [self-service sign-up user flow](self-service-sign-up-user-flow.md). You can create a personalized sign-up experience by customizing the self-service sign-up user flow. For example, you can provide options to sign up with Azure AD or social identity providers and collect information about the user during the sign-up process.
> [!NOTE] > You can associate user flows with apps built by your organization. User flows can't be used for Microsoft apps, like SharePoint or Teams.
You can configure user flow settings to control how the user signs up for the ap
- Account types used for sign-in, such as social accounts like Facebook, or Azure AD accounts - Attributes to be collected from the user signing up, such as first name, postal code, or country/region of residency
-When a user wants to sign in to your application, whether it's a web, mobile, desktop, or single-page application (SPA), the application initiates an authorization request to the user flow-provided endpoint. The user flow defines and controls the user's experience. When the user completes the sign-up user flow, Azure AD generates a token and redirects the user back to your application. Upon completion of sign-up, a guest account is provisioned for the user in the directory. Multiple applications can use the same user flow.
+The user can sign in to your application from a web, mobile, desktop, or single-page application (SPA). The application initiates an authorization request to the user flow-provided endpoint. The user flow defines and controls the user's experience. When the user completes the sign-up user flow, Azure AD generates a token and redirects the user back to your application. Upon completion of sign-up, a guest account is provisioned for the user in the directory. Multiple applications can use the same user flow.
## Example of self-service sign-up
-The following example illustrates how we're bringing social identity providers to Azure AD with self-service sign up capabilities for guest users.
+The following example illustrates how we're bringing social identity providers to Azure AD with self-service sign-up capabilities for guest users.
A partner of Woodgrove opens the Woodgrove app. They decide they want to sign up for a supplier account, so they select Request your supplier account, which initiates the self-service sign-up flow. ![Example of self-service sign-up starting page](media/self-service-sign-up-overview/example-start-sign-up-flow.png)
The user enters the information, continues the sign-up flow, and gets access to
## Next steps
- For details, see how to [add self-service sign-up to an app](self-service-sign-up-user-flow.md).
+ For details, see how to [add self-service sign-up to an app](self-service-sign-up-user-flow.md).
active-directory Automate Provisioning To Applications Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-solutions.md
The table depicts common scenarios and the recommended technology.
## Automate provisioning users into non-Microsoft applications
-After identities are in Azure AD through HR-provisioning or Azure AD Connect Could Sync / Azure AD Connect Sync, the employee can use the identity to access Teams, SharePoint, and Microsoft 365 applications. However, employees still need access to many Microsoft applications to perform their work.
+After identities are in Azure AD through HR-provisioning or Azure AD Connect cloud sync / Azure AD Connect sync, the employee can use the identity to access Teams, SharePoint, and Microsoft 365 applications. However, employees still need access to many Microsoft applications to perform their work.
![Automation decision matrix](media/automate-user-provisioning-to-applications-solutions/automate-provisioning-decision-matrix.png)
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/choose-ad-authn.md
Details on decision questions:
2. Azure AD can hand off user sign-in to a trusted authentication provider such as MicrosoftΓÇÖs AD FS. 3. If you need to apply, user-level Active Directory security policies such as account expired, disabled account, password expired, account locked out, and sign-in hours on each user sign-in, Azure AD requires some on-premises components. 4. Sign-in features not natively supported by Azure AD:
- * Sign-in using smartcards or certificates.
* Sign-in using on-premises MFA Server. * Sign-in using third-party authentication solution. * Multi-site on-premises authentication solution.
The following diagrams outline the high-level architecture components required f
|Is there a health monitoring solution?|Not required|Agent status provided by [Azure Active Directory admin center](../../active-directory/hybrid/tshoot-connect-pass-through-authentication.md)|[Azure AD Connect Health](../../active-directory/hybrid/how-to-connect-health-adfs.md)| |Do users get single sign-on to cloud resources from domain-joined devices within the company network?|Yes with [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)|Yes with [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)|Yes| |What sign-in types are supported?|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)<br><br>[Alternate login ID](../../active-directory/hybrid/how-to-connect-install-custom.md)|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)<br><br>[Alternate login ID](../../active-directory/hybrid/how-to-connect-pta-faq.yml)|UserPrincipalName + password<br><br>sAMAccountName + password<br><br>Windows-Integrated Authentication<br><br>[Certificate and smart card authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br><br>[Alternate login ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id)|
-|Is Windows Hello for Business supported?|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br>*Requires Windows Server 2016 Domain functional level*|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Certificate trust model](/windows/security/identity-protection/hello-for-business/hello-key-trust-adfs)|
+|Is Windows Hello for Business supported?|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Hybrid Cloud Trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Hybrid Cloud Trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)<br><br>*Both require Windows Server 2016 Domain functional level*|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Hybrid Cloud Trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)<br><br>[Certificate trust model](/windows/security/identity-protection/hello-for-business/hello-key-trust-adfs)|
|What are the multifactor authentication options?|[Azure AD MFA](/azure/multi-factor-authentication/)<br><br>[Custom Controls with Conditional Access*](../../active-directory/conditional-access/controls.md)|[Azure AD MFA](/azure/multi-factor-authentication/)<br><br>[Custom Controls with Conditional Access*](../../active-directory/conditional-access/controls.md)|[Azure AD MFA](/azure/multi-factor-authentication/)<br><br>[Azure MFA server](../../active-directory/authentication/howto-mfaserver-deploy.md)<br><br>[Third-party MFA](/windows-server/identity/ad-fs/operations/configure-additional-authentication-methods-for-ad-fs)<br><br>[Custom Controls with Conditional Access*](../../active-directory/conditional-access/controls.md)| |What user account states are supported?|Disabled accounts<br>(up to 30-minute delay)|Disabled accounts<br><br>Account locked out<br><br>Account expired<br><br>Password expired<br><br>Sign-in hours|Disabled accounts<br><br>Account locked out<br><br>Account expired<br><br>Password expired<br><br>Sign-in hours| |What are the Conditional Access options?|[Azure AD Conditional Access, with Azure AD Premium](../../active-directory/conditional-access/overview.md)|[Azure AD Conditional Access, with Azure AD Premium](../../active-directory/conditional-access/overview.md)|[Azure AD Conditional Access, with Azure AD Premium](../../active-directory/conditional-access/overview.md)<br><br>[AD FS claim rules](https://adfshelp.microsoft.com/AadTrustClaims/ClaimsGenerator)|
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Microsoft provides support for this public preview release, but it might not be
These limitations and known issues are specific to group writeback: -- Cloud [distribution list groups](/exchange/recipients-in-exchange-online/manage-distribution-groups/manage-distribution-groups) created in Exchange Online can't be written back to Active Directory. Only Microsoft 365 and Azure AD security groups are supported. -- When you enable group writeback, all existing Microsoft 365 groups are written back and created as distribution groups by default. This behavior is for backward compatibility with the current version of group writeback. You can modify this behavior by following the steps in [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md). -- When you disable writeback for a group, the group won't automatically be removed from your on-premises Active Directory instance until you hard delete it in Azure AD. You can modify this behavior by following the steps in [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md). -- Group writeback does not support writeback of nested group members that have a scope of **Domain local** in Active Directory, because Azure AD security groups are written back with a scope of **Universal**. 
+- Cloud [distribution list groups](https://docs.microsoft.com/exchange/recipients-in-exchange-online/manage-distribution-groups/manage-distribution-groups) created in Exchange Online can't be written back to AD. Only Microsoft 365 and Azure AD security groups are supported.
+- To be backward compatible with the current version of group writeback, when you enable group writeback, all existing Microsoft 365 groups are written back and created as distribution groups by default.
+- When you disable writeback for a group, the group won't automatically be removed from your on-premises Active Directory until it's hard deleted in Azure AD. You can modify this behavior by following the steps in [Modifying group writeback](how-to-connect-modify-group-writeback.md).
+- Group writeback doesn't support writeback of nested group members that have a scope of ‘Domain local’ in AD, because Azure AD security groups are written back with the scope ‘Universal’. If you have a nested group like this, you'll see an export error in Azure AD Connect with the message “A universal group cannot have a local group as a member.” The resolution is to remove the ‘Domain local’ member from the Azure AD group, or to update the nested group member's scope in AD to ‘Global’ or ‘Universal’ (see the sketch after this list).
+- Group writeback only supports writing back groups to a single organizational unit (OU). Once the feature is enabled, you can't change the OU you selected. A workaround is to disable group writeback entirely in Azure AD Connect and then select a different OU when you re-enable the feature.
+- Nested cloud groups that are members of writeback-enabled groups must also be enabled for writeback to remain nested in AD.
+- A group writeback setting to manage new security group writeback at scale isn't yet available. You'll need to configure writeback for each group.
+ If you have a nested group like this, you'll see an export error in Azure AD Connect with the message "A universal group cannot have a local group as a member." The resolution is to remove the member with the **Domain local** scope from the Azure AD group, or update the nested group member scope in Active Directory to **Global** or **Universal**. - Group writeback supports writing back groups to only a single organizational unit (OU). After the feature is enabled, you can't change the OU that you selected. A workaround is to disable group writeback entirely in Azure AD Connect and then select a different OU when you re-enable the feature.
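For the nested-group limitation above, here's a minimal PowerShell sketch, assuming the on-premises ActiveDirectory (RSAT) module and a hypothetical group name, that inspects the offending member's scope and converts it to Universal so the written-back group can contain it:

```powershell
# Minimal sketch; the group name is hypothetical. Requires the ActiveDirectory module
# on a domain-joined machine and rights to modify the group.
Import-Module ActiveDirectory

$nestedGroup = 'APP-LocalResourceAccess'   # hypothetical nested on-premises group

# Inspect the current scope (DomainLocal, Global, or Universal).
Get-ADGroup -Identity $nestedGroup | Select-Object Name, GroupScope

# Convert the group to Universal scope so a written-back Universal group can contain it.
# Review AD group-scope conversion rules before changing production groups.
Set-ADGroup -Identity $nestedGroup -GroupScope Universal
```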
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md
If you see an error or have problems with connectivity, then see [Troubleshoot c
The following sections describe the pages in the **Sync** section. ### Connect your directories
-To connect to Active Directory Domain Services (Azure AD DS), Azure AD Connect needs the forest name and credentials of an account that has sufficient permissions.
+To connect to Active Directory Domain Services (AD DS), Azure AD Connect needs the forest name and credentials of an account that has sufficient permissions.
![Screenshot that shows the "Connect your directories" page.](./media/how-to-connect-install-custom/connectdir01.png)
After you enter the forest name and select **Add Directory**, a window appears.
| Option | Description | | | |
-| Create new account | Create the Azure AD DS account that Azure AD Connect needs to connect to the Active Directory forest during directory synchronization. After you select this option, enter the username and password for an enterprise admin account. Azure AD Connect uses the provided enterprise admin account to create the required Azure AD DS account. You can enter the domain part in either NetBIOS format or FQDN format. That is, enter *FABRIKAM\administrator* or *fabrikam.com\administrator*. |
-| Use existing account | Provide an existing Azure AD DS account that Azure AD Connect can use to connect to the Active Directory forest during directory synchronization. You can enter the domain part in either NetBIOS format or FQDN format. That is, enter *FABRIKAM\syncuser* or *fabrikam.com\syncuser*. This account can be a regular user account because it needs only the default read permissions. But depending on your scenario, you might need more permissions. For more information, see [Azure AD Connect accounts and permissions](reference-connect-accounts-permissions.md#create-the-ad-ds-connector-account). |
+| Create new account | Create the AD DS account that Azure AD Connect needs to connect to the Active Directory forest during directory synchronization. After you select this option, enter the username and password for an enterprise admin account. Azure AD Connect uses the provided enterprise admin account to create the required AD DS account. You can enter the domain part in either NetBIOS format or FQDN format. That is, enter *FABRIKAM\administrator* or *fabrikam.com\administrator*. |
+| Use existing account | Provide an existing AD DS account that Azure AD Connect can use to connect to the Active Directory forest during directory synchronization. You can enter the domain part in either NetBIOS format or FQDN format. That is, enter *FABRIKAM\syncuser* or *fabrikam.com\syncuser*. This account can be a regular user account because it needs only the default read permissions. But depending on your scenario, you might need more permissions. For more information, see [Azure AD Connect accounts and permissions](reference-connect-accounts-permissions.md#create-the-ad-ds-connector-account). |
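For the **Use existing account** option above, here's a minimal sketch, with hypothetical names, for creating a regular user in the target forest to serve as the AD DS connector account; grant permissions beyond the default read access only if your scenario requires them:

```powershell
# Sketch only: creates a plain user account (hypothetical name) to use as the
# AD DS connector account. Default read permissions are enough for basic sync.
Import-Module ActiveDirectory
New-ADUser -Name 'syncuser' -SamAccountName 'syncuser' `
    -AccountPassword (Read-Host 'Enter a password' -AsSecureString) `
    -Enabled $true
```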
![Screenshot showing the "Connect Directory" page and the A D forest account window, where you can choose to create a new account or use an existing account.](./media/how-to-connect-install-custom/connectdir02.png) >[!NOTE]
-> As of build 1.4.18.0, you can't use an enterprise admin or domain admin account as the Azure AD DS connector account. When you select **Use existing account**, if you try to enter an enterprise admin account or a domain admin account, you see the following error: "Using an Enterprise or Domain administrator account for your AD forest account is not allowed. Let Azure AD Connect create the account for you or specify a synchronization account with the correct permissions."
+> As of build 1.4.18.0, you can't use an enterprise admin or domain admin account as the AD DS connector account. When you select **Use existing account**, if you try to enter an enterprise admin account or a domain admin account, you see the following error: "Using an Enterprise or Domain administrator account for your AD forest account is not allowed. Let Azure AD Connect create the account for you or specify a synchronization account with the correct permissions."
> ### Azure AD sign-in configuration
-On the **Azure AD sign-in configuration** page, review the user principal name (UPN) domains in on-premises Azure AD DS. These UPN domains have been verified in Azure AD. On this page, you configure the attribute to use for the userPrincipalName.
+On the **Azure AD sign-in configuration** page, review the user principal name (UPN) domains in on-premises AD DS. These UPN domains have been verified in Azure AD. On this page, you configure the attribute to use for the userPrincipalName.
![Screenshot showing unverified domains on the "Azure A D sign-in configuration" page.](./media/how-to-connect-install-custom/aadsigninconfig2.png)
If you see this warning, make sure that these domains are indeed unreachable and
On the **Identifying users** page, choose how to identify users in your on-premises directories and how to identify them by using the sourceAnchor attribute. #### Select how users should be identified in your on-premises directories
-By using the *Matching across forests* feature, you can define how users from your Azure AD DS forests are represented in Azure AD. A user might be represented only once across all forests or might have a combination of enabled and disabled accounts. The user might also be represented as a contact in some forests.
+By using the *Matching across forests* feature, you can define how users from your AD DS forests are represented in Azure AD. A user might be represented only once across all forests or might have a combination of enabled and disabled accounts. The user might also be represented as a contact in some forests.
![Screenshot showing the page where you can uniquely identify your users.](./media/how-to-connect-install-custom/unique2.png)
The AD FS service requires a domain service account to authenticate users and to
If you selected **Create a group Managed Service Account** and this feature has never been used in Active Directory, then enter your enterprise admin credentials. These credentials are used to initiate the key store and enable the feature in Active Directory. > [!NOTE]
-> Azure AD Connect checks whether the AD FS service is already registered as a service principal name (SPN) in the domain. Azure AD DS doesn't allow duplicate SPNs to be registered at the same time. If a duplicate SPN is found, you can't proceed further until the SPN is removed.
+> Azure AD Connect checks whether the AD FS service is already registered as a service principal name (SPN) in the domain. AD DS doesn't allow duplicate SPNs to be registered at the same time. If a duplicate SPN is found, you can't proceed further until the SPN is removed.
![Screenshot showing the "A D F S service account" page.](./media/how-to-connect-install-custom/adfs5.png)
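If the wizard blocks on a duplicate SPN, one way to locate and clear it is the built-in `setspn` tool; the federation service name and account below are hypothetical examples:

```powershell
# Hypothetical federation service name and service account; adjust to your environment.
# Find which accounts currently hold the AD FS host SPN.
setspn -Q host/fs.contoso.com

# Search the forest for duplicate SPN registrations.
setspn -X -F

# Remove a stale registration from the old account before rerunning the wizard.
setspn -D host/fs.contoso.com OLDADFS-SVC
```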
Azure AD Connect verifies the DNS settings when you select the **Verify** button
To validate end-to-end authentication, manually perform one or more of the following tests: * When synchronization finishes, in Azure AD Connect, use the **Verify federated login** additional task to verify authentication for an on-premises user account that you choose.
-* From a domain-joined machine on the intranet, ensure that you can sign in from a browser. Connect to https://myapps.microsoft.com. Then use your logged-on account to verify the sign-in. The built-in Azure AD DS administrator account isn't synchronized, and you can't use it for verification.
+* From a domain-joined machine on the intranet, ensure that you can sign in from a browser. Connect to https://myapps.microsoft.com. Then use your logged-on account to verify the sign-in. The built-in AD DS administrator account isn't synchronized, and you can't use it for verification.
* Ensure that you can sign in from a device on the extranet. On a home machine or a mobile device, connect to https://myapps.microsoft.com. Then provide your credentials. * Validate rich client sign-in. Connect to https://testconnectivity.microsoft.com. Then select **Office 365** > **Office 365 Single Sign-On Test**.
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
For reference, this snippet is what it should look like:
</configuration> ```
-For information about security and FIPS, see [Azure AD password hash sync, encryption, and FIPS compliance](https://blogs.technet.microsoft.com/enterprisemobility/2014/06/28/aad-password-sync-encryption-and-fips-compliance/).
+For information about security and FIPS, see [Azure AD password hash sync, encryption, and FIPS compliance](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/aad-password-sync-encryption-and-fips-compliance/ba-p/243709).
## Troubleshoot password hash synchronization If you have problems with password hash synchronization, see [Troubleshoot password hash synchronization](tshoot-connect-password-hash-synchronization.md).
active-directory How To Connect Sync Change Addsacct Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-change-addsacct-pass.md
-# Changing the AD DS account password
-The AD DS account refers to the user account used by Azure AD Connect to communicate with on-premises Active Directory. If you change the password of the AD DS account, you must update Azure AD Connect Synchronization Service with the new password. Otherwise, the Synchronization can no longer synchronize correctly with the on-premises Active Directory and you will encounter the following errors:
+# Changing the AD DS connector account password
+The AD DS connector account refers to the user account used by Azure AD Connect to communicate with on-premises Active Directory. If you change the password of the AD DS connector account in AD, you must update the Azure AD Connect Synchronization Service with the new password. Otherwise, the Synchronization Service can no longer synchronize correctly with the on-premises Active Directory, and you'll encounter the following errors:
* In the Synchronization Service Manager, any import or export operation with on-premises AD fails with **no-start-credentials** error. * Under Windows Event Viewer, the application event log contains an error with **Event ID 6000** and message **'The management agent "contoso.com" failed to run because the credentials were invalid'**.
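A quick way to confirm the Event ID 6000 symptom described above is to query the Application log on the Azure AD Connect server; this is only a diagnostic sketch:

```powershell
# Look for recent credential-failure events logged by the synchronization service.
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 6000 } -MaxEvents 10 |
    Select-Object TimeCreated, Id, Message
```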
-## How to update the Synchronization Service with new password for AD DS account
+## How to update the Synchronization Service with new password for AD DS connector account
To update the Synchronization Service with the new password: 1. Start the Synchronization Service Manager (START → Synchronization Service).
To update the Synchronization Service with the new password:
2. Go to the **Connectors** tab.
-3. Select the **AD Connector** that corresponds to the AD DS account for which its password was changed.
+3. Select the **AD Connector** that corresponds to the AD DS connector account whose password was changed.
4. Under **Actions**, select **Properties**. 5. In the pop-up dialog, select **Connect to Active Directory Forest**:
-6. Enter the new password of the AD DS account in the **Password** textbox.
+6. Enter the new password of the AD DS connector account in the **Password** textbox.
7. Click **OK** to save the new password and close the pop-up dialog.
active-directory Reference Connect Adsynctools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsynctools.md
with: UserPrincipalName, Mail, SourceAnchor, DistinguishedName, CsObjectId, Obje
### EXAMPLES #### EXAMPLE 1 ```
-Export-ADSyncToolsDisconnectors -SyncObjectType 'PublicFolder'
+Export-ADSyncToolsAadDisconnectors -SyncObjectType 'PublicFolder'
``` Exports to CSV all PublicFolder Disconnector objects #### EXAMPLE 2 ```
-Export-ADSyncToolsDisconnectors
+Export-ADSyncToolsAadDisconnectors
``` Exports to CSV all Disconnector objects ### PARAMETERS
active-directory Whatis Hybrid Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-hybrid-identity.md
Here are some common hybrid identity and access management scenarios with recomm
|Enable cloud-based multi-factor authentication solutions.|![Recommended](./media/whatis-hybrid-identity/ic195031.png)|![Recommended](./media/whatis-hybrid-identity/ic195031.png)|![Recommended](./media/whatis-hybrid-identity/ic195031.png)| |Enable on-premises multi-factor authentication solutions.| | |![Recommended](./media/whatis-hybrid-identity/ic195031.png)| |Support smartcard authentication for my users.<sup>4</sup>| | |![Recommended](./media/whatis-hybrid-identity/ic195031.png)|
-|Display password expiry notifications in the Office Portal and on the Windows 10 desktop.| | |![Recommended](./media/whatis-hybrid-identity/ic195031.png)|
> <sup>1</sup> Password hash synchronization with single sign-on. >
active-directory Olfeo Saas Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/olfeo-saas-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Log in to the Olfeo SAAS admin console. 1. Navigate to **Configuration > Annuaires**. 1. Create a new directory and then name it.
-1. Select **Azure** provider and then click on **Cr�er** to save the new directory.
+1. Select **Azure** provider and then click on **Créer** to save the new directory.
1. Navigate to the **Synchronisation** tab to see the **Tenant URL** and the **Jeton secret**. These values will be copied and pasted in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Olfeo SAAS application in the Azure portal. ## Step 3. Add Olfeo SAAS from the Azure AD application gallery
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
In addition to choosing between manual and automatic mode, there are several opt
|--disable-image-cleaner|Disable the ImageCleaner feature for an AKS cluster|Yes, unless enable is specified| |--image-cleaner-interval-hours|This parameter determines the interval time (in hours) ImageCleaner will use to run. The default value is one week, the minimum value is 24 hours and the maximum is three months.|No|
+> [!NOTE]
+> After disabling ImageCleaner, the old configuration still exists. This means that if you enable the feature again without explicitly passing configuration, the existing value will be used rather than the default.
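As a hedged sketch of that behavior, assuming the preview CLI extension also exposes an `--enable-image-cleaner` flag on `az aks update` alongside the flags in the table above, you could pass the interval explicitly when re-enabling so the old value isn't silently reused:

```powershell
# Hypothetical resource names; the enable flag is assumed to mirror --disable-image-cleaner.
az aks update --resource-group myResourceGroup --name myAKSCluster --disable-image-cleaner

# When re-enabling, set the interval explicitly instead of relying on the stored configuration.
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-image-cleaner --image-cleaner-interval-hours 48
```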
+ ## Enable ImageCleaner on your AKS cluster To create a new AKS cluster using the default interval, use [az aks create][az-aks-create]:
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
This article shows you how to use the open-source [kured (KUbernetes REboot Daem
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+You need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Understand the AKS node update experience
helm repo update
kubectl create namespace kured # Install kured in that namespace with Helm 3 (only on Linux nodes; kured doesn't work on Windows nodes)
-helm install kured kured/kured --namespace kured --set nodeSelector."kubernetes\.io/os"=linux
+helm install my-release kubereboot/kured --namespace kured --set nodeSelector."kubernetes\.io/os"=linux
``` You can also configure additional parameters for `kured`, such as integration with Prometheus or Slack. For more information about additional configuration parameters, see the [kured Helm chart][kured-install].
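After the chart is installed, a quick sanity check (sketch; the namespace matches the one created above) is to confirm the DaemonSet is running and then watch node status while kured works:

```powershell
# Confirm the kured DaemonSet is running in the namespace created above.
kubectl get daemonset --namespace kured

# Watch node status changes (cordon, drain, reboot) as kured processes nodes.
kubectl get nodes --watch
```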
When one of the replicas in the DaemonSet has detected that a node reboot is req
You can monitor the status of the nodes using the [kubectl get nodes][kubectl-get-nodes] command. The following example output shows a node with a status of *SchedulingDisabled* as the node prepares for the reboot process:
-```
+```output
NAME STATUS ROLES AGE VERSION aks-nodepool1-28993262-0 Ready,SchedulingDisabled agent 1h v1.11.7 ``` Once the update process is complete, you can view the status of the nodes using the [kubectl get nodes][kubectl-get-nodes] command with the `--output wide` parameter. This additional output lets you see a difference in *KERNEL-VERSION* of the underlying nodes, as shown in the following example output. The *aks-nodepool1-28993262-0* was updated in a previous step and shows kernel version *4.15.0-1039-azure*. The node *aks-nodepool1-28993262-1* that hasn't been updated shows kernel version *4.15.0-1037-azure*.
-```
+```output
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME aks-nodepool1-28993262-0 Ready agent 1h v1.11.7 10.240.0.4 <none> Ubuntu 16.04.6 LTS 4.15.0-1039-azure docker://3.0.4 aks-nodepool1-28993262-1 Ready agent 1h v1.11.7 10.240.0.5 <none> Ubuntu 16.04.6 LTS 4.15.0-1037-azure docker://3.0.4
For AKS clusters that use Windows Server nodes, see [Upgrade a node pool in AKS]
[kubectl-get-nodes]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get <!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[install-azure-cli]: /cli/azure/install-azure-cli [DaemonSet]: concepts-clusters-workloads.md#statefulsets-and-daemonsets [aks-ssh]: ssh.md
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
Example:
az role assignment create --assignee 22222222-2222-2222-2222-222222222222 --role "Contributor" --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/custom-resource-group" ```
-For user-assigned kubelet identity which is outside the default woker node resource group, you need to assign the `Managed Identity Operator`on kubelet identity.
+For a user-assigned kubelet identity that's outside the default worker node resource group, you need to assign the `Managed Identity Operator` role on the kubelet identity.
```azurecli-interactive az role assignment create --assignee <control-plane-identity-principal-id> --role "Managed Identity Operator" --scope "<kubelet-identity-resource-id>"
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
API Management offers both managed and self-hosted gateways:
>
-* **Self-hosted** - The [self-hosted gateway](self-hosted-gateway-overview.md) is an optional, containerized version of the default managed gateway. It's useful for hybrid and multi-cloud scenarios where there is a requirement to run the gateways off Azure in the same environments where API backends are hosted. The self-hosted gateway enables customers with hybrid IT infrastructure to manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
+* **Self-hosted** - The [self-hosted gateway](self-hosted-gateway-overview.md) is an optional, containerized version of the default managed gateway. It's useful for hybrid and multicloud scenarios where there's a requirement to run the gateways off of Azure in the same environments where API backends are hosted. The self-hosted gateway enables customers with hybrid IT infrastructure to manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
* The self-hosted gateway is [packaged](self-hosted-gateway-overview.md#packaging) as a Linux-based Docker container and is commonly deployed to Kubernetes, including to [Azure Kubernetes Service](how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md) and [Azure Arc-enabled Kubernetes](how-to-deploy-self-hosted-gateway-azure-arc.md). * Each self-hosted gateway is associated with a **Gateway** resource in a cloud-based API Management instance from which it receives configuration updates and communicates status. ## Feature comparison: Managed versus self-hosted gateways
-The following table compares features available in the managed gateway versus those in the self-hosted gateway. Differences are also shown between the managed gateway for dedicated service tiers (Developer, Basic, Standard, Premium) and for the Consumption tier.
+The following table compares features available in the managed gateway versus the features in the self-hosted gateway. Differences are also shown between the managed gateway for dedicated service tiers (Developer, Basic, Standard, Premium) and for the Consumption tier.
> [!NOTE] > * Some features of managed and self-hosted gateways are supported only in certain [service tiers](api-management-features.md) or with certain [deployment environments](self-hosted-gateway-overview.md#packaging) for self-hosted gateways. > * See also self-hosted gateway [limitations](self-hosted-gateway-overview.md#limitations). - ### Infrastructure | Feature support | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
For estimated maximum gateway throughput in the API Management service tiers, se
## Next steps -- Learn more about [API Management in a Hybrid and Multi-Cloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
+- Learn more about [API Management in a Hybrid and multicloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
- Learn more about using the [capacity metric](api-management-capacity.md) for scaling decisions - Learn about [observability capabilities](observability.md) in API Management
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
This article provides the steps for deploying self-hosted gateway component of Azure API Management to [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/). For deploying self-hosted gateway to a Kubernetes cluster, see the how-to article for deployment by using a [deployment YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md) or [with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md). + > [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md).
api-management How To Deploy Self Hosted Gateway Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-docker.md
This article provides the steps for deploying self-hosted gateway component of Azure API Management to a Docker environment. + > [!NOTE] > Hosting self-hosted gateway in Docker is best suited for evaluation and development use cases. Kubernetes is recommended for production use. Learn how to [deploy with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md) or using [deployment YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md) to learn how to deploy self-hosted gateway to Kubernetes.
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
This article provides the steps for deploying self-hosted gateway component of Azure API Management to a Kubernetes cluster by using Helm. + > [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md).
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
Last updated 05/25/2021
This article describes the steps for deploying the self-hosted gateway component of Azure API Management to a Kubernetes cluster. + > [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md).
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
Last updated 12/17/2021
# Guidance for running self-hosted gateway on Kubernetes in production
-In order to run the self-hosted gateway in production, there are various aspects to take in to mind. For example, it should be deployed in a highly-available manner, use configuration backups to handle temporary disconnects and many more.
+In order to run the self-hosted gateway in production, there are various aspects to keep in mind. For example, it should be deployed in a highly available manner, use configuration backups to handle temporary disconnects, and more.
This article provides guidance on how to run [self-hosted gateway](./self-hosted-gateway-overview.md) on Kubernetes for production workloads to ensure that it will run smoothly and reliably. + ## Access token Without a valid access token, a self-hosted gateway can't access and download configuration data from the endpoint of the associated API Management service. The access token can be valid for a maximum of 30 days. It must be regenerated, and the cluster configured with a fresh token, either manually or via automation before it expires.
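For the token rotation mentioned above, here's a minimal sketch of updating the token in the cluster; the secret name, namespace, and data key are hypothetical and must match what your gateway deployment actually references:

```powershell
# Hypothetical secret name, namespace, and key; align them with your gateway deployment.
# The token itself is generated from the gateway resource in API Management.
$token = '<new-gateway-token>'
kubectl create secret generic apim-gateway-token `
    --namespace apim-gateway `
    --from-literal=value=$token `
    --dry-run=client -o yaml | kubectl apply -f -

# Restart the gateway pods so they pick up the rotated token.
kubectl rollout restart deployment/apim-gateway --namespace apim-gateway
```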
An alternative is to use Kubernetes Event-driven Autoscaling (KEDA) allowing you
### Traffic-based autoscaling
-Kubernetes does not provide an out-of-the-box mechanism for traffic-based autoscaling.
+Kubernetes doesn't provide an out-of-the-box mechanism for traffic-based autoscaling.
Kubernetes Event-driven Autoscaling (KEDA) provides a few ways that can help with traffic-based autoscaling: -- You can scale based on metrics from a Kubernetes ingress if they are available in [Prometheus](https://keda.sh/docs/latest/scalers/prometheus/) or [Azure Monitor](https://keda.sh/docs/latest/scalers/azure-monitor/) by using an out-of-the-box scaler
+- You can scale based on metrics from a Kubernetes ingress if they're available in [Prometheus](https://keda.sh/docs/latest/scalers/prometheus/) or [Azure Monitor](https://keda.sh/docs/latest/scalers/azure-monitor/) by using an out-of-the-box scaler
- You can install [HTTP add-on](https://github.com/kedacore/http-add-on), which is available in beta, and scales based on the number of requests per second. ## Container resources
Consider using [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/work
## Security The self-hosted gateway is able to run as non-root in Kubernetes allowing customers to run the gateway securely.
-Here is an example of the security context for the self-hosted gateway:
+Here's an example of the security context for the self-hosted gateway:
```yml securityContext: allowPrivilegeEscalation: false
api-management Import Container App With Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-container-app-with-oas.md
Title: Import Azure Container App to Azure API Management | Microsoft Docs
+ Title: Import Azure Container App to Azure API Management | Microsoft Docs
description: This article shows you how to use Azure API Management to import a web API hosted in Azure Container Apps. documentationcenter: ''
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
This article explains how to migrate existing self-hosted gateway deployments to self-hosted gateway v2.
+> [!IMPORTANT]
+> Support for Azure API Management self-hosted gateway version 0 and version 1 container images is ending on 1 October 2023, along with the corresponding Configuration API v1. [Learn more in our deprecation documentation](./breaking-changes/self-hosted-gateway-v0-v1-retirement-oct-2023.md).
+ ## What's new? As we strive to make it easier for customers to deploy our self-hosted gateway, we've **introduced a new configuration API** that removes the dependency on Azure Storage, unless you're using [API inspector](api-management-howto-api-inspector.md) or quotas.
Customer must use the new Configuration API v2 by changing their deployment scri
#### Available TLS cipher suites
-At launch, self-hosted gateway v2.0 only used a subset of the cipher suites that v1.x was using. As of v2.0.4, we have brought back all the cipher suites that v1.x supported.
+At launch, self-hosted gateway v2.0 only used a subset of the cipher suites that v1.x was using. As of v2.0.4, we've brought back all the cipher suites that v1.x supported.
You can learn more about the used cipher suites in [this article](self-hosted-gateway-overview.md#available-cipher-suites) or use v2.1.1 to [control what cipher suites to use](self-hosted-gateway-overview.md#managing-cipher-suites).
In order to make the migration easier, we have introduced new Azure Advisor reco
We highly recommend customers to use ["All Recommendations" overview in Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/All) to determine if a migration is required. Use the filtering options to see if one of the above recommendations is present.
+### Use Azure Resource Graph to identify Azure API Management instances
+
+This Azure Resource Graph query provides you with a list of impacted Azure API Management instances:
+
+```kusto
+AdvisorResources
+| where type == 'microsoft.advisor/recommendations'
+| where properties.impactedField == 'Microsoft.ApiManagement/service' and properties.category == 'OperationalExcellence'
+| extend
+ recommendationTitle = properties.shortDescription.solution
+| where recommendationTitle == 'Use self-hosted gateway v2' or recommendationTitle == 'Use Configuration API v2 for self-hosted gateways'
+| extend
+ instanceName = properties.impactedValue,
+ recommendationImpact = properties.impact,
+ recommendationMetadata = properties.extendedProperties,
+ lastUpdated = properties.lastUpdated
+| project tenantId, subscriptionId, resourceGroup, instanceName, recommendationTitle, recommendationImpact, recommendationMetadata, lastUpdated
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az graph query -q "AdvisorResources | where type == 'microsoft.advisor/recommendations' | where properties.impactedField == 'Microsoft.ApiManagement/service' and properties.category == 'OperationalExcellence' | extend recommendationTitle = properties.shortDescription.solution | where recommendationTitle == 'Use self-hosted gateway v2' or recommendationTitle == 'Use Configuration API v2 for self-hosted gateways' | extend instanceName = properties.impactedValue, recommendationImpact = properties.impact, recommendationMetadata = properties.extendedProperties, lastUpdated = properties.lastUpdated | project tenantId, subscriptionId, resourceGroup, instanceName, recommendationTitle, recommendationImpact, lastUpdated"
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Search-AzGraph -Query "AdvisorResources | where type == 'microsoft.advisor/recommendations' | where properties.impactedField == 'Microsoft.ApiManagement/service' and properties.category == 'OperationalExcellence' | extend recommendationTitle = properties.shortDescription.solution | where recommendationTitle == 'Use self-hosted gateway v2' or recommendationTitle == 'Use Configuration API v2 for self-hosted gateways' | extend instanceName = properties.impactedValue, recommendationImpact = properties.impact, recommendationMetadata = properties.extendedProperties, lastUpdated = properties.lastUpdated | project tenantId, subscriptionId, resourceGroup, instanceName, recommendationTitle, recommendationImpact, lastUpdated"
+```
+
+# [Portal](#tab/azure-portal)
++
+- Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/AdvisorResources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.advisor%2Frecommendations%27%0A%7C%20where%20properties.impactedField%20%3D%3D%20%27Microsoft.ApiManagement%2Fservice%27%20and%20properties.category%20%3D%3D%20%27OperationalExcellence%27%0A%7C%20extend%0A%20%20%20%20recommendationTitle%20%3D%20properties.shortDescription.solution%0A%7C%20where%20recommendationTitle%20%3D%3D%20%27Use%20self-hosted%20gateway%20v2%27%20or%20recommendationTitle%20%3D%3D%20%27Use%20Configuration%20API%20v2%20for%20self-hosted%20gateways%27%0A%7C%20extend%0A%20%20%20%20instanceName%20%3D%20properties.impactedValue%2C%0A%20%20%20%20recommendationImpact%20%3D%20properties.impact%2C%0A%20%20%20%20recommendationMetadata%20%3D%20properties.extendedProperties%2C%0A%20%20%20%20lastUpdated%20%3D%20properties.lastUpdated%0A%7C%20project%20tenantId%2C%20subscriptionId%2C%20resourceGroup%2C%20instanceName%2C%20recommendationTitle%2C%20recommendationImpact%2C%20recommendationMetadata%2C%20lastUpdated" target="_blank">portal.azure.com</a>
+- Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/AdvisorResources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.advisor%2Frecommendations%27%0A%7C%20where%20properties.impactedField%20%3D%3D%20%27Microsoft.ApiManagement%2Fservice%27%20and%20properties.category%20%3D%3D%20%27OperationalExcellence%27%0A%7C%20extend%0A%20%20%20%20recommendationTitle%20%3D%20properties.shortDescription.solution%0A%7C%20where%20recommendationTitle%20%3D%3D%20%27Use%20self-hosted%20gateway%20v2%27%20or%20recommendationTitle%20%3D%3D%20%27Use%20Configuration%20API%20v2%20for%20self-hosted%20gateways%27%0A%7C%20extend%0A%20%20%20%20instanceName%20%3D%20properties.impactedValue%2C%0A%20%20%20%20recommendationImpact%20%3D%20properties.impact%2C%0A%20%20%20%20recommendationMetadata%20%3D%20properties.extendedProperties%2C%0A%20%20%20%20lastUpdated%20%3D%20properties.lastUpdated%0A%7C%20project%20tenantId%2C%20subscriptionId%2C%20resourceGroup%2C%20instanceName%2C%20recommendationTitle%2C%20recommendationImpact%2C%20recommendationMetadata%2C%20lastUpdated" target="_blank">portal.azure.us</a>
+- Azure China 21Vianet portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/AdvisorResources%0A%7C%20where%20type%20%3D%3D%20%27microsoft.advisor%2Frecommendations%27%0A%7C%20where%20properties.impactedField%20%3D%3D%20%27Microsoft.ApiManagement%2Fservice%27%20and%20properties.category%20%3D%3D%20%27OperationalExcellence%27%0A%7C%20extend%0A%20%20%20%20recommendationTitle%20%3D%20properties.shortDescription.solution%0A%7C%20where%20recommendationTitle%20%3D%3D%20%27Use%20self-hosted%20gateway%20v2%27%20or%20recommendationTitle%20%3D%3D%20%27Use%20Configuration%20API%20v2%20for%20self-hosted%20gateways%27%0A%7C%20extend%0A%20%20%20%20instanceName%20%3D%20properties.impactedValue%2C%0A%20%20%20%20recommendationImpact%20%3D%20properties.impact%2C%0A%20%20%20%20recommendationMetadata%20%3D%20properties.extendedProperties%2C%0A%20%20%20%20lastUpdated%20%3D%20properties.lastUpdated%0A%7C%20project%20tenantId%2C%20subscriptionId%2C%20resourceGroup%2C%20instanceName%2C%20recommendationTitle%2C%20recommendationImpact%2C%20recommendationMetadata%2C%20lastUpdated" target="_blank">portal.azure.cn</a>
+++ ## Known limitations Here's a list of known limitations for the self-hosted gateway v2:
Here's a list of known limitations for the self-hosted gateway v2:
## Next steps -- Learn more about [API Management in a Hybrid and Multi-Cloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
+- Learn more about [API Management in a Hybrid and multicloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
- Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md) - [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md) - [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md)
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Deploying self-hosted gateways into the same environments where the backend API
:::image type="content" source="media/self-hosted-gateway-overview/with-gateways.png" alt-text="API traffic flow with self-hosted gateways"::: - ## Packaging The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/shgw/registry-portal) from the Microsoft Artifact Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
Self-hosted gateways require outbound TCP/IP connectivity to Azure on port 443.
- Sending metrics to Azure Monitor, if configured to do so - Sending events to Application Insights, if set to do so + ### FQDN dependencies To operate properly, each self-hosted gateway needs outbound connectivity on port 443 to the following endpoints associated with its cloud-based API Management instance:
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 09/19/2022 Last updated : 10/01/2022
compliant with the specific standard.
## Release notes
+### October 2022
+
+- **Function app slots should have remote debugging turned off**
+ - New policy created
+- **App Service app slots should have remote debugging turned off**
+ - New policy created
+- **Function app slots should use latest 'HTTP Version'**
+ - New policy created
+- **Function app slots should use the latest TLS version**
+ - New policy created
+- **App Service app slots should use the latest TLS version**
+ - New policy created
+- **App Service app slots should have resource logs enabled**
+ - New policy created
+- **App Service app slots should enable outbound non-RFC 1918 traffic to Azure Virtual Network**
+ - New policy created
+- **App Service app slots should use managed identity**
+ - New policy created
+- **App Service app slots should use latest 'HTTP Version'**
+ - New policy created
+- Deprecation of policy **Configure App Services to disable public network access**
+ - Replaced by "Configure App Service apps to disable public network access"
+- Deprecation of policy **App Services should disable public network access**
+ - Replaced by "App Service apps should disable public network access" to support _Deny_ effect
+- **App Service apps should disable public network access**
+ - New policy created
+- **App Service app slots should disable public network access**
+ - New policy created
+- **Configure App Service apps to disable public network access**
+ - New policy created
+- **Configure App Service app slots to disable public network access**
+ - New policy created
+- **Function apps should disable public network access**
+ - New policy created
+- **Function app slots should disable public network access**
+ - New policy created
+- **Configure Function apps to disable public network access**
+ - New policy created
+- **Configure Function app slots to disable public network access**
+ - New policy created
+- **Configure App Service app slots to turn off remote debugging**
+ - New policy created
+- **Configure Function app slots to turn off remote debugging**
+ - New policy created
+- **Configure App Service app slots to use the latest TLS version**
+ - New policy created
+- **Configure Function app slots to use the latest TLS version**
+ - New policy created
+- **App Service apps should use latest 'HTTP Version'**
+ - Updated scope to include Windows apps
+- **Function apps should use latest 'HTTP Version'**
+ - Updated scope to include Windows apps
+ ### September 2022 - **App Service apps should be injected into a virtual network**
automation Automation Managed Identity Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managed-identity-faq.md
Title: Azure Automation migration to managed identity FAQ
-description: This article gives answers to frequently asked questions when you're migrating from Run As account to managed identity
+description: This article gives answers to frequently asked questions when you're migrating from a Run As account to a managed identity.
#Customer intent: As an implementer, I want answers to various questions.
-# Frequently asked questions when migrating from Run As account to managed identities
+# FAQ for migrating from a Run As account to a managed identity
-This Microsoft FAQ is a list of commonly asked questions when you're migrating from Run As account to Managed Identity. If you have any other questions about the capabilities, go to the [discussion forum](https://aka.ms/retirement-announcement-automation-runbook-start-using-managed-identities) and post your questions. When a question is frequently asked, we add it to this article so that it benefits all.
+The following FAQ can help you migrate from a Run As account to a managed identity in Azure Automation. If you have any other questions about the capabilities, post them on the [discussion forum](https://aka.ms/retirement-announcement-automation-runbook-start-using-managed-identities). When a question is frequently asked, we add it to this article so that it benefits everyone.
-## How long will you support Run As account?
+## How long will you support a Run As account?
-Automation Run As account will be supported for the next one year until **September 30, 2023**. While we continue to support existing users, we recommend all new users to use Managed identities as the preferred way of runbook authentication. Existing users can still create the Run As account, see the account properties and renew the certificate upon expiration till **January 30, 2023**. After this date, you won't be able to create a Run As account from the Azure portal. You will still be able to create a Run As account through [PowerShell script](/azure/automation/create-run-as-account#create-account-using-powershell) until the supported time of one year. You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the certificate post **January 30, 2023** until **September 30, 2023**. This script will assess automation account which has configured Run As accounts and renews the certificate if the user chooses to do so. On confirmation, it will renew the key credentials of Azure-AD App and upload new self-signed certificate to the Azure-AD App.
+Automation Run As accounts will be supported until *September 30, 2023*. Although we continue to support existing users, we recommend that all new users use managed identities for runbook authentication.
+Existing users can still create a Run As account. You can go to the account properties and renew a certificate upon expiration until *January 30, 2023*. After that date, you won't be able to create a Run As account from the Azure portal.
+
+You'll still be able to create a Run As account through a [PowerShell script](/azure/automation/create-run-as-account#create-account-using-powershell) until support ends. You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the certificate after *January 30, 2023*, until *September 30, 2023*. This script will assess the Automation account that has configured Run As accounts and renew the certificate if you choose to do so. On confirmation, the script will renew the key credentials of the Azure Active Directory (Azure AD) app and upload a new self-signed certificate to the Azure AD app.
## Will existing runbooks that use the Run As account be able to authenticate?
-Yes, they will be able to authenticate and there will be no impact to the existing runbooks using Run As account.
+Yes, they'll be able to authenticate. There will be no impact to existing runbooks that use a Run As account.
-## How can I renew the existing Run as accounts post January 30, 2023 when portal support to renew the account to removed?
-You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the Run As account certificate post January 30, 2023 until September 30, 2023.
+## How can I renew an existing Run As account after January 30, 2023, when portal support to renew the account is removed?
+You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the Run As account certificate after January 30, 2023, until September 30, 2023.
-## Can Run As account still be created post September 30, 2023 when Run As account will retire?
-Yes, you can still create the Run As account using the [PowerShell script](../automation/create-run-as-account.md#create-account-using-powershell). However, this would be an unsupported scenario.
+## Can Run As accounts still be created after September 30, 2023, when Run As accounts will retire?
+Yes, you can still create Run As accounts by using the [PowerShell script](../automation/create-run-as-account.md#create-account-using-powershell). However, this will be an unsupported scenario.
-## Can Run As accounts still be renewed post September 30, 2023 when Run As account will retire?
-You can [use this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the Run As account certificate post September 30, 2023 when Run As account will retire. However, it would be an unsupported scenario.
+## Can Run As accounts still be renewed after September 30, 2023, when Run As accounts will retire?
+You can use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/RunAsAccountAssessAndRenew.ps1) to renew the Run As account certificate after September 30, 2023, when Run As accounts will retire. However, it will be an unsupported scenario.
-## Will the runbooks that still use the Run As account be able to authenticate even after September 30, 2023?
+## Will runbooks that still use the Run As account be able to authenticate after September 30, 2023?
Yes, the runbooks will be able to authenticate until the Run As account certificate expires.
-## What is managed identity?
-Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without managing credentials, secrets, certificates or keys.
+## What is a managed identity?
+Applications use managed identities in Azure AD when they're connecting to resources that support Azure AD authentication. Applications can use managed identities to obtain Azure AD tokens without managing credentials, secrets, certificates, or keys.
For more information about managed identities in Azure AD, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview). ## What can I do with a managed identity in Automation accounts?
-An Azure Automation managed identity from Azure Active Directory (Azure AD) allows your runbook to access other Azure AD-protected resources easily. This identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets. Key benefits are:
+An Azure Automation managed identity from Azure AD allows your runbook to access other Azure AD-protected resources easily. This identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets.
+
+Key benefits are:
- You can use managed identities to authenticate to any Azure service that supports Azure AD authentication.-- Managed identities eliminate the management overhead associated with managing Run As account in your runbook code. You can access resources via a managed identity of an Automation account from a runbook without worrying about creating the service principal, Run As Certificate, Run As Connection and so on.-- You don't have to renew the certificate used by the Automation Run As account.
+- Managed identities eliminate the overhead associated with managing Run As accounts in your runbook code. You can access resources via a managed identity of an Automation account from a runbook without worrying about creating the service principal, Run As certificate, Run As connection, and so on.
+- You don't have to renew the certificate that the Automation Run As account uses.
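To illustrate the first benefit in the list above, here's a minimal runbook sketch that authenticates with the Automation account's system-assigned managed identity; it assumes the identity has already been granted an appropriate Azure RBAC role (for example, Reader) on the target scope:

```powershell
# Minimal runbook sketch: authenticate with the system-assigned managed identity
# and list resource groups the identity can read.
Disable-AzContextAutosave -Scope Process
Connect-AzAccount -Identity
Get-AzResourceGroup | Select-Object ResourceGroupName, Location
```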
-## Are Managed identities more secure than Run As account?
-Run As account creates an Azure AD app used to manage the resources within the subscription through a certificate having contributor access at the subscription level by default. A malicious user could use this certificate to perform a privileged operation against resources in the subscription leading to potential vulnerabilities. Run As accounts also have a management overhead associated that involves creating a service principal, RunAsCertificate, RunAsConnection, certificate renewal and so on.
+## Are managed identities more secure than a Run As account?
+A Run As account creates an Azure AD app that's used to manage the resources within the subscription through a certificate that has contributor access at the subscription level by default. A malicious user could use this certificate to perform a privileged operation against resources in the subscription, leading to potential vulnerabilities.
-Managed identities eliminate this overhead by providing a secure method for the users to authenticate and access resources that support Azure AD authentication without worrying about any certificate or credential management.
+Run As accounts also have a management overhead that involves creating a service principal, Run As certificate, Run As connection, certificate renewal, and so on. Managed identities eliminate this overhead by providing a secure method for users to authenticate and access resources that support Azure AD authentication without worrying about any certificate or credential management.
-## Can managed identity be used for both cloud and hybrid jobs?
-Azure Automation supports [System-assigned managed identities](/azure/automation/automation-security-overview#managed-identities) for both cloud and Hybrid jobs. Currently, Azure Automation [User-assigned managed identities](/azure/automation/automation-security-overview#managed-identities-preview) can only be used for cloud jobs only and cannot be used for jobs run on a Hybrid Worker.
+## Can a managed identity be used for both cloud and hybrid jobs?
+Azure Automation supports [system-assigned managed identities](/azure/automation/automation-security-overview#managed-identities) for both cloud and hybrid jobs. Currently, Azure Automation [user-assigned managed identities](/azure/automation/automation-security-overview#managed-identities-preview) can be used for cloud jobs only and can't be used for jobs that run on a hybrid worker.
-## Can I use Run as account for new Automation account?
-Yes, only in a scenario when Managed identities aren't supported for specific on-premises resources. We'll allow the creation of Run As account through [PowerShell script](/azure/automation/create-run-as-account#create-account-using-powershell).
+## Can I use a Run As account for a new Automation account?
+Yes, but only in a scenario where managed identities aren't supported for specific on-premises resources. We'll allow the creation of a Run As account through a [PowerShell script](/azure/automation/create-run-as-account#create-account-using-powershell).
-## How can I migrate from existing Run As account to managed identities?
-Follow the steps mentioned in [migrate Run As accounts to Managed identity](/azure/automationmigrate-run-as-accounts-managed-identity).
+## How can I migrate from an existing Run As account to a managed identity?
+Follow the steps in [Migrate an existing Run As account to a managed identity](/azure/automation/migrate-run-as-accounts-managed-identity).
-## How do I see the runbooks that are using Run As account and know what permissions are assigned to the Run As account?
-Use the [script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/Check-AutomationRunAsAccountRoleAssignments.ps1) here to find out which Automation accounts are using Run As account. If your Azure Automation accounts contain a Run As account, it will by default, have the built-in contributor role assigned to it. You can use this script to check the role assignments of your Azure Automation Run As accounts and determine if their role assignment is the default one or if it has been changed to a different role definition.
+## How do I see the runbooks that are using a Run As account and know what permissions are assigned to that account?
+Use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/Check-AutomationRunAsAccountRoleAssignments.ps1) to find out which Automation accounts are using a Run As account. If your Azure Automation accounts contain a Run As account, it has the built-in Contributor role assigned to it by default. You can use the script to check the Azure Automation Run As accounts and determine whether their role assignment is the default one or whether it has been changed to a different role definition.
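The linked script is the authoritative way to audit your accounts. As a rough illustration only, the following sketch shows the kind of check it performs for a single Run As account; the `$runAsApplicationId` placeholder and the specific Az cmdlets used here are assumptions, not an excerpt from that script.

```powershell
# Requires the Az.Resources module and an existing Connect-AzAccount session.
# Placeholder: the application (client) ID shown on the Automation account's Run As account page.
$runAsApplicationId = "<run-as-application-id>"

# Resolve the Run As account's service principal, then list its role assignments.
$servicePrincipal = Get-AzADServicePrincipal -ApplicationId $runAsApplicationId
Get-AzRoleAssignment -ObjectId $servicePrincipal.Id |
    Select-Object RoleDefinitionName, Scope
```

A default Run As account typically shows the Contributor role scoped to the subscription.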
## Next steps
-If your question isn't answered here, you can refer to the following sources for more questions and answers.
+If your question isn't answered here, you can refer to the following sources for more questions and answers:
- [Azure Automation](https://docs.microsoft.com/answers/topics/azure-automation.html) - [Feedback forum](https://feedback.azure.com/d365community/forum/721a322e-bd25-ec11-b6e6-000d3a4f0f1c)
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
You've tested and published your runbook, but so far it doesn't do anything usef
1. Select **Overview** and then **Edit** to open the textual editor.
-1. Replace all of the existing code with the following:
+1. Replace the existing code with the following:
```powershell workflow MyFirstRunbook-Workflow
You've tested and published your runbook, but so far it doesn't do anything usef
Disable-AzContextAutosave -Scope Process # Connect to Azure with system-assigned managed identity
- $AzureContext = (Connect-AzAccount -Identity).context
+ Connect-AzAccount -Identity
# set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+    $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
} ``` Edit the `$resourceGroup` variable with a valid value representing your resource group. 1. If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then:
- 1. From line 9, remove `$AzureContext = (Connect-AzAccount -Identity).context`,
- 1. Replace it with `$AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context`, and
+ 1. From line 9, remove `Connect-AzAccount -Identity`,
+ 1. Replace it with `Connect-AzAccount -Identity -AccountId <ClientId>`, and
1. Enter the Client ID you obtained earlier. 1. Select **Save** and then **Test pane**.
You can use the `ForEach -Parallel` construct to process commands for each item
Disable-AzContextAutosave -Scope Process # Connect to Azure with system-assigned managed identity
- $AzureContext = (Connect-AzAccount -Identity).context
+ Connect-AzAccount -Identity
# set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+    $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
# Start or stop VMs in parallel if($action -eq "Start")
You can use the `ForEach -Parallel` construct to process commands for each item
``` 1. If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then:
- 1. From line 13, remove `$AzureContext = (Connect-AzAccount -Identity).context`,
- 1. Replace it with `$AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context`, and
+    1. From line 9, remove `Connect-AzAccount -Identity`,
+    1. Replace it with `Connect-AzAccount -Identity -AccountId <ClientId>`, and
1. Enter the Client ID you obtained earlier. 1. Select **Save**, then **Publish**, and then **Yes** when prompted.
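For readers who want to see the `ForEach -Parallel` pattern in one place, the following condensed sketch stitches together the fragments shown above. The workflow name, the `$VMs` and `$action` parameters, and the `<SubscriptionID>` placeholder are illustrative; the full runbook in the tutorial remains the reference.

```powershell
workflow Stop-OrStartVms
{
    param(
        [string]$resourceGroup,
        [string[]]$VMs,
        [string]$action
    )

    # Ensure the runbook doesn't inherit a cached context
    Disable-AzContextAutosave -Scope Process

    # Connect to Azure with the system-assigned managed identity
    Connect-AzAccount -Identity

    # Set and store context
    $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"

    # Start or stop VMs in parallel
    ForEach -Parallel ($VM in $VMs)
    {
        if ($action -eq "Start")
        {
            Start-AzVM -Name $VM -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
        }
        else
        {
            Stop-AzVM -Name $VM -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext -Force
        }
    }
}
```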
automation Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md
Title: Migrate run as accounts to managed identity in Azure Automation account
-description: This article describes how to migrate from run as accounts to managed identity.
+ Title: Migrate from a Run As account to a managed identity
+description: This article describes how to migrate from a Run As account to a managed identity in Azure Automation.
Last updated 04/27/2022
-# Migrate from existing Run As accounts to managed identity
+# Migrate from an existing Run As account to a managed identity
> [!IMPORTANT]
-> Azure Automation Run As Account will retire on **September 30, 2023**, and there will be no support provided beyond this date. From now through **September 30, 2023**, you can continue to use the Azure Automation Run As Account. However, we recommend you to transition to [managed identities](/automation-security-overview.md#managed-identities) before **September 30, 2023**.
+> Azure Automation Run As accounts will retire on *September 30, 2023*. Microsoft won't provide support beyond that date. From now through *September 30, 2023*, you can continue to use Azure Automation Run As accounts. However, we recommend that you transition to [managed identities](../automation/automation-security-overview.md#managed-identities) before *September 30, 2023*.
+>
+> For more information about migration cadence and the support timeline for Run As account creation and certificate renewal, see the [frequently asked questions](automation-managed-identity-faq.md).
-See the [frequently asked questions](/automation/automation-managed-identity.md) for more information about migration cadence and support timeline for Run As account creation and certificate renewal.
+Run As accounts in Azure Automation provide authentication for managing resources deployed through Azure Resource Manager or the classic deployment model. Whenever a Run As account is created, an Azure AD application is registered, and a self-signed certificate is generated. The certificate is valid for one year. Renewing the certificate every year before it expires keeps the Automation account working but adds overhead.
- Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. Whenever a Run As account is created, an Azure AD application is registered, and a self-signed certificate will be generated which will be valid for one year. This adds an overhead of renewing the certificate every year before it expires to prevent the Automation account to stop working.
+You can now configure Automation accounts to use a [managed identity](automation-security-overview.md#managed-identities), which is the default option when you create an Automation account. With this feature, an Automation account can authenticate to Azure resources without the need to exchange any credentials. A managed identity removes the overhead of renewing the certificate or managing the service principal.
-Automation accounts can now be configured to use [Managed Identity](/automation/automation-security-overview.md#managed-identities) which is the default option when an Automation account is created. With this feature, Automation account can authenticate to Azure resources without the need to exchange any credentials, hence removing the overhead of renewing the certificate or managing the service principal.
-
-Managed identity can be [system assigned]( /automation/enable-managed-identity-or-automation) or [user assigned](/automation/add-user-assigned-identity). However, when a new Automation account is created, a system assigned managed identity is enabled.
+A managed identity can be [system assigned](enable-managed-identity-for-automation.md) or [user assigned](add-user-assigned-identity.md). When a new Automation account is created, a system-assigned managed identity is enabled.
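For an existing Automation account, you can enable the system-assigned managed identity from the portal or with Azure PowerShell. The following sketch assumes a recent Az.Automation module version that supports the `-AssignSystemIdentity` switch; the resource names are placeholders.

```powershell
# Enable the system-assigned managed identity on an existing Automation account.
Set-AzAutomationAccount `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<automation-account-name>" `
    -AssignSystemIdentity
```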
## Prerequisites
-Ensure the following to migrate from the Run As account to Managed identities:
+Before you migrate from a Run As account to a managed identity:
-1. Create a [system-assigned](enable-managed-identity-for-automation.md) or [user-assigned](add-user-assigned-identity.md), or both types of managed identities. To learn more about the differences between the two types of managed identities, see [Managed Identity Types](/active-directory/managed-identities-azure-resources/overview#managed-identity-types).
+1. Create a [system-assigned](enable-managed-identity-for-automation.md) or [user-assigned](add-user-assigned-identity.md) managed identity, or create both types. To learn more about the differences between them, see [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
> [!NOTE]
- > - User-assigned identities are supported for cloud jobs only. It isn't possible to use the Automation Account's User Managed Identity on a Hybrid Runbook Worker. To use hybrid jobs, you must create a System-assigned identities.
- > - There are two ways to use the Managed Identities in Hybrid Runbook Worker scripts. Either the System-assigned Managed Identity for the Automation account **OR** VM Managed Identity for an Azure VM running as a Hybrid Runbook Worker.
- > - Both the VM's User-assigned Managed Identity or the VM's system assigned Managed Identity will **NOT** work in an Automation account that is configured with an Automation account Managed Identity. When you enable the Automation account Managed Identity, you can only use the Automation Account System-Assigned Managed Identity and not the VM Managed Identity. For more information, see [Use runbook authentication with managed identities](/automation/automation-hrw-run-runbooks?tabs=sa-mi#runbook-auth-managed-identities).
+   > - User-assigned identities are supported for cloud jobs only. It isn't possible to use the Automation account's user-managed identity on a hybrid runbook worker. To use hybrid jobs, you must use a system-assigned identity.
+ > - There are two ways to use managed identities in hybrid runbook worker scripts: either the system-assigned managed identity for the Automation account *or* the virtual machine (VM) managed identity for an Azure VM running as a hybrid runbook worker.
+ > - The VM's user-assigned managed identity and the VM's system-assigned managed identity will *not* work in an Automation account that's configured with an Automation account's managed identity. When you enable the Automation account's managed identity, you can use only the Automation account's system-assigned managed identity and not the VM managed identity. For more information, see [Use runbook authentication with managed identities](automation-hrw-run-runbooks.md).
+
+1. Assign the same role to the managed identity to access the Azure resources that match the Run As account. Follow the steps in [Check the role assignment for the Azure Automation Run As account](manage-run-as-account.md#check-role-assignment-for-azure-automation-run-as-account).
-1. Assign same role to the managed identity to access the Azure resources matching the Run As account. Follow the steps in [Check role assignment for Azure Automation Run As account](/automation/manage-run-as-account#check-role-assignment-for-azure-automation-run-as-account).
-Ensure that you don't assign high privilege permissions like Contributor, Owner and so on to Run as account. Follow the RBAC guidelines to limit the permissions from the default Contributor permissions assigned to Run As account using this [script](/azure/automation/manage-runas-account#limit-run-as-account-permissions).
+ Ensure that you don't assign high-privilege permissions like contributor or owner to the Run As account. Follow the role-based access control (RBAC) guidelines to limit the permissions from the default contributor permissions assigned to a Run As account by using [this script](manage-run-as-account.md#limit-run-as-account-permissions).
- For example, if the Automation account is only required to start or stop an Azure VM, then the permissions assigned to the Run As account needs to be only for starting or stopping the VM. Similarly, assign read-only permissions if a runbook is reading from blob storage. Read more about [Azure Automation security guidelines](/azure/automation/automation-security-guidelines#authentication-certificate-and-identities).
+ For example, if the Automation account is required only to start or stop an Azure VM, then the permissions assigned to the Run As account need to be only for starting or stopping the VM. Similarly, assign read-only permissions if a runbook is reading from Azure Blob Storage. For more information, see [Azure Automation security guidelines](../automation/automation-security-guidelines.md#authentication-certificate-and-identities).
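As a hedged illustration of applying that least-privilege guidance to the managed identity, the following sketch scopes a role assignment to a single VM instead of the subscription. The principal ID, role name, and scope are placeholders; choose the narrowest built-in or custom role that fits what your runbooks actually do.

```powershell
# Placeholder: the object (principal) ID of the Automation account's managed identity.
$principalId = "<managed-identity-principal-id>"

# Scope the assignment to a single VM rather than the whole subscription.
New-AzRoleAssignment `
    -ObjectId $principalId `
    -RoleDefinitionName "Virtual Machine Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
```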
-## Migrate from Automation Run As account to Managed Identity
+## Migrate from an Automation Run As account to a managed identity
-To migrate from an Automation Run As account to a Managed Identity for your runbook authentication, follow the steps below:
+To migrate from an Automation Run As account to a managed identity for your runbook authentication, follow these steps:
-1. Change the runbook code to use managed identity. We recommend that you test the managed identity to verify if the runbook works as expected by creating a copy of your production runbook to use managed identity. Update your test runbook code to authenticate by using the managed identities. This ensures that you don't override the AzureRunAsConnection in your production runbook and break the existing Automation. After you are sure that the runbook code executes as expected using the Managed Identities, update your production runbook to use managed identities.
+1. Change the runbook code to use a managed identity.
+
+ We recommend that you test the managed identity to verify if the runbook works as expected by creating a copy of your production runbook. Update your test runbook code to authenticate by using the managed identity. This method ensures that you don't override `AzureRunAsConnection` in your production runbook and break the existing Automation instance. After you're sure that the runbook code runs as expected via the managed identity, update your production runbook to use the managed identity.
- For Managed Identity support, use the Az cmdlet Connect-AzAccount cmdlet. use the Az cmdlet `Connect-AzAccount` cmdlet. See [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) in the PowerShell reference.
+    For managed identity support, use the `Connect-AzAccount` cmdlet. To learn more about this cmdlet, see [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) in the PowerShell reference.
- - If you are using Az modules, update to the latest version following the steps in the [Update Azure PowerShell modules](automation-update-azure-modules.md#update-az-modules) article.
- - If you are using AzureRM modules, Update `AzureRM.Profile` to latest version and replace using `Add-AzureRMAccount` cmdlet with `Connect-AzureRMAccount ΓÇôIdentity`.
+    - If you're using Az modules, update to the latest version by following the steps in the [Update Azure PowerShell modules](/azure/automation/automation-update-azure-modules#update-az-modules) article.
+    - If you're using AzureRM modules, update `AzureRM.Profile` to the latest version and replace the `Add-AzureRMAccount` cmdlet with `Connect-AzureRMAccount -Identity`.
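The following before-and-after sketch illustrates that AzureRM change. The `$conn` variable and the commented-out Run As sign-in are hypothetical, and the sketch assumes an AzureRM.Profile version that supports `-Identity`.

```powershell
# Before: Run As account sign-in (hypothetical variable names)
# $conn = Get-AutomationConnection -Name "AzureRunAsConnection"
# Add-AzureRMAccount -ServicePrincipal -TenantId $conn.TenantId `
#     -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint

# After: managed identity sign-in
Connect-AzureRMAccount -Identity
```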
- Follow the sample scripts below to know the change required to the runbook code to use Managed Identities
+ To understand the changes to the runbook code that are required before you can use managed identities, use the [sample scripts](#sample-scripts).
-1. Once you are sure that the runbook is executing successfully by using managed identities, you can safely [delete the Run as account](/azure/automation/delete-run-as-account) if the Run as account is not used by any other runbook.
+1. When you're sure that the runbook is running successfully by using managed identities, you can safely [delete the Run As account](../automation/delete-run-as-account.md) if no other runbook is using that account.
## Sample scripts
-Following are the examples of a runbook that fetches the ARM resources using the Run As Account (Service Principal) and managed identity.
+The following examples of runbook scripts fetch the Resource Manager resources by using the Run As account (service principal) and the managed identity.
# [Run As account](#tab/run-as-account)
Following are the examples of a runbook that fetches the ARM resources using the
$connectionName = "AzureRunAsConnection" try {
- # Get the connection "AzureRunAsConnection "
+ # Get the connection "AzureRunAsConnection"
$servicePrincipalConnection=Get-AutomationConnection -Name $connectionName "Logging in to Azure..."
Following are the examples of a runbook that fetches the ARM resources using the
} }
- #Get all ARM resources from all resource groups
+ #Get all Resource Manager resources from all resource groups
$ResourceGroups = Get-AzureRmResourceGroup foreach ($ResourceGroup in $ResourceGroups)
Following are the examples of a runbook that fetches the ARM resources using the
} ```
-# [System-assigned Managed identity](#tab/sa-managed-identity)
+# [System-assigned managed identity](#tab/sa-managed-identity)
>[!NOTE]
-> Enable appropriate RBAC permissions to the system identity of this automation account. Otherwise, the runbook may fail.
+> Enable appropriate RBAC permissions for the system identity of this Automation account. Otherwise, the runbook might fail.
```powershell {
Following are the examples of a runbook that fetches the ARM resources using the
throw $_.Exception }
- #Get all ARM resources from all resource groups
+ #Get all Resource Manager resources from all resource groups
$ResourceGroups = Get-AzResourceGroup foreach ($ResourceGroup in $ResourceGroups)
Following are the examples of a runbook that fetches the ARM resources using the
Write-Output ("") } ```
-# [User-assigned Managed identity](#tab/ua-managed-identity)
+# [User-assigned managed identity](#tab/ua-managed-identity)
```powershell {
catch {
Write-Error -Message $_.Exception throw $_.Exception }
-#Get all ARM resources from all resource groups
+#Get all Resource Manager resources from all resource groups
$ResourceGroups = Get-AzResourceGroup foreach ($ResourceGroup in $ResourceGroups) {
foreach ($ResourceGroup in $ResourceGroups)
## Graphical runbooks
-### How to check if Run As account is used in Graphical Runbooks
+### Check if a Run As account is used in graphical runbooks
-To check if Run As account is used in Graphical Runbooks:
-
-1. Check each of the activities within the runbook to see if they use the Run As Account when calling any logon cmdlets/aliases. For example, `Add-AzRmAccount/Connect-AzRmAccount/Add-AzAccount/Connect-AzAccount`
+1. Check each of the activities within the runbook to see if it uses the Run As account when it calls any logon cmdlets or aliases, such as `Add-AzRmAccount/Connect-AzRmAccount/Add-AzAccount/Connect-AzAccount`.
- :::image type="content" source="./media/migrate-run-as-account-managed-identity/check-graphical-runbook-use-run-as-inline.png" alt-text="Screenshot to check if graphical runbook uses Run As." lightbox="./media/migrate-run-as-account-managed-identity/check-graphical-runbook-use-run-as-expanded.png":::
+ :::image type="content" source="./media/migrate-run-as-account-managed-identity/check-graphical-runbook-use-run-as-inline.png" alt-text="Screenshot that illustrates checking if a graphical runbook uses a Run As account." lightbox="./media/migrate-run-as-account-managed-identity/check-graphical-runbook-use-run-as-expanded.png":::
-1. Examine the parameters used by the cmdlet.
+1. Examine the parameters that the cmdlet uses.
- :::image type="content" source="./medilet":::
+ :::image type="content" source="./medilet.":::
-1. For use with the Run As account, it will use the *ServicePrinicipalCertificate* parameter set *ApplicationId* and *Certificate Thumbprint* will be from the RunAsAccountConnection.
+    For use with the Run As account, the cmdlet uses the `ServicePrincipalCertificate` parameter set. `ApplicationId` and `CertificateThumbprint` come from `RunAsAccountConnection`.
- :::image type="content" source="./media/migrate-run-as-account-managed-identity/parameter-sets-inline.png" alt-text="Screenshot to check the parameter sets." lightbox="./media/migrate-run-as-account-managed-identity/parameter-sets-expanded.png":::
+ :::image type="content" source="./media/migrate-run-as-account-managed-identity/parameter-sets-inline.png" alt-text="Screenshot that shows parameter sets." lightbox="./media/migrate-run-as-account-managed-identity/parameter-sets-expanded.png":::
+### Edit a graphical runbook to use a managed identity
-### How to edit graphical Runbook to use managed identity
-
-You must test the managed identity to verify if the Graphical runbook is working as expected by creating a copy of your production runbook to use the managed identity and updating your test graphical runbook code to authenticate by using the managed identity. You can add this functionality to a graphical runbook by adding `Connect-AzAccount` cmdlet.
+You must test the managed identity to verify that the graphical runbook is working as expected. Create a copy of your production runbook to use the managed identity, and then update your test graphical runbook code to authenticate by using the managed identity. You can add this functionality to a graphical runbook by adding the `Connect-AzAccount` cmdlet.
-Listed below is an example to guide on how a graphical runbook that uses Run As account uses managed identities:
+The following steps include an example to show how a graphical runbook that uses a Run As account can use managed identities:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Open the Automation account and select **Process Automation**, **Runbooks**.
-1. Here, select a runbook. For example, select *Start Azure V2 VMs* runbook either from the list and select **Edit** or go to **Browse Gallery** and select *start Azure V2 VMs*.
+1. Open the Automation account, and then select **Process Automation** > **Runbooks**.
+1. Select a runbook. For example, select the **Start Azure V2 VMs** runbook from the list, and then select **Edit**. Or go to **Browse Gallery** and select **Start Azure V2 VMs**.
- :::image type="content" source="./media/migrate-run-as-account-managed-identity/edit-graphical-runbook-inline.png" alt-text="Screenshot of edit graphical runbook." lightbox="./media/migrate-run-as-account-managed-identity/edit-graphical-runbook-expanded.png":::
+ :::image type="content" source="./media/migrate-run-as-account-managed-identity/edit-graphical-runbook-inline.png" alt-text="Screenshot of editing a graphical runbook." lightbox="./media/migrate-run-as-account-managed-identity/edit-graphical-runbook-expanded.png":::
-1. Replace, Run As connection that uses `AzureRunAsConnection`and connection asset that internally uses PowerShell `Get-AutomationConnection` cmdlet with `Connect-AzAccount` cmdlet.
+1. Replace the Run As connection that uses `AzureRunAsConnection` and the connection asset that internally uses the PowerShell `Get-AutomationConnection` cmdlet with the `Connect-AzAccount` cmdlet.
-1. Connect to Azure that uses `Connect-AzAccount` to add the identity support for use in the runbook using `Connect-AzAccount` activity from the `Az.Accounts` cmdlet that uses the PowerShell code to connect to identity.
+1. Add identity support for use in the runbook by using the `Connect-AzAccount` activity from the `Az.Accounts` cmdlet that uses the PowerShell code to connect to the managed identity.
- :::image type="content" source="./media/migrate-run-as-account-managed-identity/add-functionality-inline.png" alt-text="Screenshot of add functionality to graphical runbook." lightbox="./media/migrate-run-as-account-managed-identity/add-functionality-expanded.png":::
+ :::image type="content" source="./media/migrate-run-as-account-managed-identity/add-functionality-inline.png" alt-text="Screenshot of adding functionality to a graphical runbook." lightbox="./media/migrate-run-as-account-managed-identity/add-functionality-expanded.png":::
-1. Select **Code** to enter the following code to pass the identity.
+1. Select **Code**, and then enter the following code to pass the identity:
-```powershell-interactive
-try
-{
- Write-Output ("Logging in to Azure...")
- Connect-AzAccount -Identity
-}
-catch {
- Write-Error -Message $_.Exception
- throw $_.Exception
-}
-```
+ ```powershell-interactive
+ try
+ {
+ Write-Output ("Logging in to Azure...")
+ Connect-AzAccount -Identity
+ }
+ catch {
+ Write-Error -Message $_.Exception
+ throw $_.Exception
+ }
+ ```
-For example, in the runbook `Start Azure V2 VMs` in the runbook gallery, you must replace `Get Run As Connection` and `Connect to Azure` activities with `Connect-AzAccount` cmdlet activity.
+For example, in the runbook **Start Azure V2 VMs** in the runbook gallery, you must replace the `Get Run As Connection` and `Connect to Azure` activities with the `Connect-AzAccount` cmdlet activity.
-For more information, see sample runbook name *AzureAutomationTutorialWithIdentityGraphical* that gets created with the Automation account.
+For more information, see the sample runbook **AzureAutomationTutorialWithIdentityGraphical** that's created with the Automation account.
## Next steps -- Review the Frequently asked questions for [Migrating to Managed Identities](automation-managed-identity-faq.md).
+- Review the [frequently asked questions for migrating to managed identities](automation-managed-identity-faq.md).
-- If your runbooks aren't completing successfully, review [Troubleshoot Azure Automation managed identity issues](troubleshoot/managed-identity.md).
+- If your runbooks aren't finishing successfully, review [Troubleshoot Azure Automation managed identity issues](troubleshoot/managed-identity.md).
-- Learn more about system assigned managed identity, see [Using a system-assigned managed identity for an Azure Automation account](enable-managed-identity-for-automation.md)
+- To learn more about system-assigned managed identities, see [Using a system-assigned managed identity for an Azure Automation account](enable-managed-identity-for-automation.md).
-- Learn more about user assigned managed identity, see [Using a user-assigned managed identity for an Azure Automation account]( add-user-assigned-identity.md)
+- To learn more about user-assigned managed identities, see [Using a user-assigned managed identity for an Azure Automation account]( add-user-assigned-identity.md).
-- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
+- For information about Azure Automation account security, see [Azure Automation account authentication overview](automation-security-overview.md).
availability-zones Migrate Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-storage.md
description: Learn how to migrate your Azure storage accounts to availability zo
Previously updated : 09/21/2022 Last updated : 09/27/2022
A conversion can be accomplished in one of two ways:
#### Customer-initiated conversion (preview) > [!IMPORTANT]
-> Customer-initiated conversion is currently in preview, but is not available in the following regions:
+> Customer-initiated conversion is currently in preview and available in all public ZRS regions except for the following:
> > - (Europe) West Europe > - (Europe) UK South
A conversion can be accomplished in one of two ways:
> - (North America) East US > - (North America) East US 2 >
+> To opt in to the preview, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md) and specify **CustomerInitiatedMigration** as the feature name (a PowerShell sketch follows this note).
+>
> This preview version is provided without a service level agreement, and might not be suitable for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
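To opt in with Azure PowerShell instead of the portal, a minimal sketch follows. The **CustomerInitiatedMigration** feature name comes from the note above, while the `Microsoft.Storage` provider namespace is an assumption to verify against the linked preview-features article.

```powershell
# Register the preview feature on the current subscription (provider namespace assumed to be Microsoft.Storage).
Register-AzProviderFeature -FeatureName "CustomerInitiatedMigration" -ProviderNamespace "Microsoft.Storage"

# Check the registration state; it can take some time to change from Registering to Registered.
Get-AzProviderFeature -FeatureName "CustomerInitiatedMigration" -ProviderNamespace "Microsoft.Storage"
```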
azure-app-configuration Concept Key Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-key-value.md
Previously updated : 08/17/2022 Last updated : 09/14/2022
Each key-value is uniquely identified by its key plus a label that can be `\0`.
| Key | Description | ||| | `key` is omitted or `key=*` | Matches all keys. |
-| `key=abc` | Matches key name **abc** exactly. |
-| `key=abc*` | Matches key names that start with **abc**.|
-| `key=abc,xyz` | Matches key names **abc** or **xyz**. Limited to five CSVs. |
+| `key=abc` | Matches key name `abc` exactly. |
+| `key=abc*` | Matches key names that start with `abc`.|
+| `key=abc,xyz` | Matches key names `abc` or `xyz`. Limited to five CSVs. |
You also can include the following label patterns:
You also can include the following label patterns:
||| | `label` is omitted or `label=*` | Matches any label, which includes `\0`. | | `label=%00` | Matches `\0` label. |
-| `label=1.0.0` | Matches label **1.0.0** exactly. |
-| `label=1.0.*` | Matches labels that start with **1.0.**. |
-| `label=%00,1.0.0` | Matches labels `\0` or **1.0.0**, limited to five CSVs. |
+| `label=1.0.0` | Matches label `1.0.0` exactly. |
+| `label=1.0.*` | Matches labels that start with `1.0.`. |
+| `label=%00,1.0.0` | Matches labels `\0` or `1.0.0`, limited to five CSVs. |
> [!NOTE] > `*`, `,`, and `\` are reserved characters in queries. If a reserved character is used in your key names or labels, you must escape it by using `\{Reserved Character}` in queries.
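As an illustration of these filter patterns, the following sketch calls the Azure CLI from a PowerShell prompt; the store name is a placeholder, and the exact `az appconfig kv list` parameters should be confirmed against the CLI reference.

```powershell
# List key-values whose keys start with "abc" and whose labels start with "1.0."
az appconfig kv list --name <store-name> --key "abc*" --label "1.0.*"

# A key that contains a reserved character, such as a comma, must be escaped with a backslash
az appconfig kv list --name <store-name> --key "abc\,xyz"
```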
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-linux-custom-image.md
Title: Create Azure Functions on Linux using a custom image description: Learn how to create Azure Functions running on a custom Linux image. Previously updated : 06/10/2022 Last updated : 09/28/2022 zone_pivot_groups: programming-languages-set-functions-full
A function app on Azure manages the execution of your functions in your hosting
1. The function can now use this connection string to access the storage account. > [!NOTE]
-> If you publish your custom image to a private container registry, you must use environment variables in the *Dockerfile* for the connection string instead. For more information, see the [ENV instruction](https://docs.docker.com/engine/reference/builder/#env). You must also set the `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD` variables. To use the values, you must rebuild the image, push the image to the registry, and then restart the function app on Azure.
+> If you publish your custom image to a private container registry, you must also set the `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD` variables. For more information, see [Custom containers](../app-service/reference-app-settings.md#custom-containers) in the App Service settings reference.
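For example, you might set those two app settings with the Azure CLI from PowerShell, as in the following sketch; the app name, resource group, and credential values are placeholders.

```powershell
# Set the private registry credentials as app settings on the function app.
az functionapp config appsettings set `
    --name <function-app-name> `
    --resource-group <resource-group-name> `
    --settings DOCKER_REGISTRY_SERVER_USERNAME=<registry-username> DOCKER_REGISTRY_SERVER_PASSWORD=<registry-password>
```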
## Verify your functions on Azure
azure-functions Functions Identity Access Azure Sql With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-access-azure-sql-with-managed-identity.md
To enable system-assigned managed identity in the Azure portal:
For information on enabling system-assigned managed identity through Azure CLI or PowerShell, check out more information on [using managed identities with Azure Functions](../app-service/overview-managed-identity.md?tabs=dotnet&toc=%2fazure%2fazure-functions%2ftoc.json#add-a-system-assigned-identity).
+> [!TIP]
+> For a user-assigned managed identity, switch to the **User assigned** tab. Select **Add**, and then select a managed identity. For more information on creating a user-assigned managed identity, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
## Grant SQL database access to the managed identity
In the application settings of our Function App the SQL connection string settin
*testdb* is the name of the database we're connecting to and *demo.database.windows.net* is the name of the server we're connecting to.
+>[!TIP]
+>For user-assigned managed identity, use `Server=demo.database.windows.net; Authentication=Active Directory Managed Identity; User Id=ClientIdOfManagedIdentity; Database=testdb`.
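If you prefer to set that connection string from PowerShell rather than the portal, the following sketch uses `Update-AzFunctionAppSetting` from the Az.Functions module; the app setting name `SqlConnectionString` and the resource names are assumptions.

```powershell
# Store the managed-identity connection string as a function app setting.
Update-AzFunctionAppSetting `
    -Name "<function-app-name>" `
    -ResourceGroupName "<resource-group-name>" `
    -AppSetting @{ "SqlConnectionString" = "Server=demo.database.windows.net; Authentication=Active Directory Managed Identity; User Id=<client-id-of-managed-identity>; Database=testdb" }
```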
+ ## Next steps - [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md) - [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)-- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
+- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
The following table shows current supported Java versions for each major version
| Functions version | Java versions (Windows) | Java versions (Linux) | | -- | -- | |
-| 4.x | 11 <br/>8 | 11 <br/>8 |
+| 4.x | 17 (preview) <br/>11 <br/>8 | 17 (preview) <br/>11 <br/>8 |
| 3.x | 11 <br/>8 | 11 <br/>8 | | 2.x | 8 | n/a |
You can control the version of Java targeted by the Maven archetype by using the
The Maven archetype generates a pom.xml that targets the specified Java version. The following elements in pom.xml indicate the Java version to use:
-| Element | Java 8 value | Java 11 value | Description |
-| - | - | - | |
-| **`Java.version`** | 1.8 | 11 | Version of Java used by the maven-compiler-plugin. |
-| **`JavaVersion`** | 8 | 11 | Java version hosted by the function app in Azure. |
+| Element | Java 8 value | Java 11 value | Java 17 (preview) value | Description |
+| - | - | - | - | |
+| **`Java.version`** | 1.8 | 11 | 17 | Version of Java used by the maven-compiler-plugin. |
+| **`JavaVersion`** | 8 | 11 | 17 | Java version hosted by the function app in Azure. |
The following examples show the settings for Java 8 in the relevant sections of the pom.xml file:
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
- # Creator for indoor maps This article introduces concepts and tools that apply to Azure Maps Creator. We recommend that you read this article before you begin to use the Azure Maps Creator API and SDK.
Creator services create, store, and use various data types that are defined and
- Converted data - Dataset - Tileset
+- Custom styles
- Feature stateset ## Upload a Drawing package
Azure Maps Creator provides the following services that support map creation:
- [Dataset service](/rest/api/maps/v2/dataset). - [Tileset service](/rest/api/maps/v2/tileset). Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.
+- Custom styles. Use the [style service][style] or [visual style editor][style editor] to customize the visual elements of an indoor map.
- [Feature State service](/rest/api/maps/v2/feature-state). Use the Feature State service to support dynamic map styling. Applications can use dynamic map styling to reflect real-time events on spaces provided by the IoT system. ### Datasets
After a tileset is created, it can be retrieved by the [Render V2 service](#rend
If a tileset becomes outdated and is no longer useful, you can delete the tileset. For information about how to delete tilesets, see [Data maintenance](#data-maintenance). >[!NOTE]
->A tileset is independent of the dataset from which it was created. If you create tilesets from a dataset, and then subsequently update that dataset, the tilesets isn't updated.
+>A tileset is independent of the dataset from which it was created. If you create tilesets from a dataset, and then subsequently update that dataset, the tilesets aren't updated.
> >To reflect changes in a dataset, you must create new tilesets. Similarly, if you delete a tileset, the dataset isn't affected.
+### Custom styling (Preview)
+
+A style defines the visual appearance of a map. It defines what data to draw, the order to draw it in, and how to style the data when drawing it. Azure Maps Creator styles support the MapLibre standard for [style layers][style layers] and [sprites][sprites].
+
+When you convert a drawing package after uploading it to your Azure Maps account, default styles are applied to the elements of your map. The custom styling service enables you to customize the visual appearance of your map. You can do this by manually editing the style JSON and importing it into your Azure Maps account by using the [Style - Create][create-style] HTTP request. However, the recommended approach is to use the [visual style editor][style editor]. For more information, see [Create custom styles for indoor maps](how-to-create-custom-styles.md).
+
+Example layer in the style.json file:
+
+```json
+{
+ "id": "indoor_unit_gym_label",
+ "type": "symbol",
+ "filter": ["all", ["has","floor0"], ["any", ["==", "categoryName", "room.gym"]]],
+ "layout": {
+ "visibility": "none",
+ "icon-image": "gym",
+ "icon-size": {"stops": [[17.5, 0.7], [21, 1.1]]},
+ "symbol-avoid-edges": true,
+ "symbol-placement": "point",
+ "text-anchor": "top",
+ "text-field": "{name}",
+ "text-font": ["SegoeFrutigerHelveticaMYingHei-Medium"],
+ "text-keep-upright": true,
+ "text-letter-spacing": 0.1,
+ "text-offset": [0, 1.05],
+ "text-size": {"stops": [[18, 5], [18.5, 6.5], [19, 8], [19.5, 9.5], [20, 11]]}
+ },
+ "metadata": {"microsoft.maps:layerGroup": "labels_indoor"},
+ "minzoom": 17.5,
+ "paint": {
+ "text-color": "rgba(0, 0, 0, 1)",
+ "text-halo-blur": 0.5,
+ "text-halo-color": "rgba(255, 255, 255, 1)",
+ "text-halo-width": 1,
+ "text-opacity": ["step", ["zoom"], 0, 18, 1]
+ },
+ "source-layer": "Indoor unit"
+},
+```
+
+| Layer Properties | Description |
+||-|
+| id | The name of the layer |
+| type | The rendering type for this layer.<br/>Some of the more common types include:<br/>**fill**: A filled polygon with an optional stroked border.<br/>**line**: A stroked line.<br/>**symbol**: An icon or a text label.<br/>**fill-extrusion**: An extruded (3D) polygon. |
+| filter | Only features that match the filter criteria are displayed. |
+| layout | Layout properties for the layer. |
+| minzoom | A number between 0 and 24 that represents the minimum zoom level for the layer. At zoom levels less than the minzoom, the layer will be hidden. |
+| paint | Default paint properties for this layer. |
+| source-layer | A source supplies the data, from a vector tile source, displayed on a map. Required for vector tile sources; prohibited for all other source types, including GeoJSON sources.|
+
+#### Map configuration
+
+The map configuration is an array of configurations. Each configuration consists of a [basemap][basemap] and one or more layers, each layer consisting of a [style][style] + [tileset][tileset] tuple.
+
+The map configuration is used when you [instantiate the Indoor Manager][instantiate-indoor-manager] of a Map object while developing applications in Azure Maps. It's referenced using the `mapConfigurationId` or `alias`. Map configurations are immutable. When you make changes to an existing map configuration, a new map configuration is created, resulting in a different `mapConfigurationId`. Anytime you create a map configuration by using an alias that's already used by an existing map configuration, that alias will always point to the new map configuration.
+
+Below is an example of a map configuration JSON showing the default configurations. See the table below for a description of each element of the file:
+
+```json
+{
+ "version": 1.0,
+ "description": "This is the default Azure Maps map configuration for facility ontology tilesets.",
+ "defaultConfiguration": "indoor_light",
+ "configurations": [
+ {
+ "name": "indoor_light",
+ "displayName": "Indoor light",
+ "description": "A base style for Azure Maps.",
+ "thumbnail": "indoor_2022-01-01.png",
+ "baseMap": "microsoft_light",
+ "layers": [
+ {
+ "tilesetId": "fa37d225-924e-3f32-8441-6128d9e5519a",
+ "styleId": "microsoft-maps:indoor_2022-01-01"
+ }
+ ]
+ },
+ {
+ "name": "indoor_dark",
+ "displayName": "Indoor dark",
+ "description": "A base style for Azure Maps.",
+ "thumbnail": "indoor_dark_2022-01-01.png",
+ "baseMap": "microsoft_dark",
+ "layers": [
+ {
+ "tilesetId": "fa37d225-924e-3f32-8441-6128d9e5519a",
+ "styleId": "microsoft-maps:indoor_dark_2022-01-01"
+ }
+ ]
+ }
+ ]
+}
+```
+
+| Style Object Properties | Description |
+|-|--|
+| name | The name of the style. |
+| displayName | The display name of the style. |
+| description | The user-defined description of the style. |
+| thumbnail | Use to specify the thumbnail used in the style picker for this style. For more information, see the [style picker control][style-picker-control]. |
+| baseMap | Use to set the base map style. |
+| layers | The layers array consists of one or more *style + tileset* tuples, each being a layer of the map. This enables multiple buildings on a map, with each building represented in its own tileset. |
+
+#### Additional information
+
+- For more information about how to modify styles by using the style editor, see [Create custom styles for indoor maps][style-how-to].
+- For more information about the style REST API, see [style][style] in the Maps Creator REST API reference.
+- For more information about the map configuration REST API, see [Creator - map configuration REST API][map-config-api].
+ ### Feature statesets Feature statesets are collections of dynamic properties (*states*) that are assigned to dataset features, such as rooms or equipment. An example of a *state* can be temperature or occupancy. Each *state* is a key/value pair that contains the name of the property, the value, and the timestamp of the last update.
You can use the [Web Feature Service (WFS) API](/rest/api/maps/v2/wfs) to query
### Alias API
-Creator services such as Conversion, Dataset, Tileset, and Feature State return an identifier for each resource that's created from the APIs. The [Alias API](/rest/api/maps/v2/alias) allows you to assign an alias to reference a resource identifier.
+Creator services such as Conversion, Dataset, Tileset and Feature State return an identifier for each resource that's created from the APIs. The [Alias API](/rest/api/maps/v2/alias) allows you to assign an alias to reference a resource identifier.
### Indoor Maps module
The following example shows how to update a dataset, create a new tileset, and d
> [!div class="nextstepaction"] > [Tutorial: Creating a Creator indoor map](tutorial-creator-indoor-maps.md)+
+> [!div class="nextstepaction"]
+> [Create custom styles for indoor maps](how-to-create-custom-styles.md)
+
+[style layers]: https://docs.mapbox.com/mapbox-gl-js/style-spec/layers/#layout
+[sprites]: https://docs.mapbox.com/help/glossary/sprite/
+[create-style]: /rest/api/maps/v20220901preview/style/create
+[basemap]: supported-map-styles.md
+[style]: /rest/api/maps/v20220901preview/style
+[tileset]: /rest/api/maps/v20220901preview/tileset
+[style-picker-control]: choose-map-style.md#add-the-style-picker-control
+[style-how-to]: how-to-create-custom-styles.md
+[map-config-api]: /rest/api/maps/v20220901preview/mapconfiguration
+[instantiate-indoor-manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager
+[style editor]: https://azure.github.io/Azure-Maps-Style-Editor
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
+
+ Title: Create custom styles for indoor maps
+
+description: Learn how to use Maputnik with Azure Maps Creator to create custom styles for your indoor maps.
++ Last updated : 9/23/2022+++++
+# Create custom styles for indoor maps (preview)
+
+When you create an indoor map using Azure Maps Creator, default styles are applied. This article discusses how to customize these styling elements.
+
+## Prerequisites
+
+- Understanding of [Creator concepts](creator-indoor-maps.md).
+- An Azure Maps Creator [tileset][tileset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps][tutorial] tutorial helpful.
+
+## Create custom styles using Creator's visual editor
+
+While it's possible to modify your indoor map's styles by using the [Creator REST API][creator api], Creator also offers a [visual style editor][style editor] that lets you create custom styles without writing code. This article focuses exclusively on creating custom styles by using the style editor.
+
+### Open style
+
+When an indoor map is created in your Azure Maps Creator service, default styles are automatically created for you. In order to customize the styling elements of your indoor map, you'll need to open that default style.
+
+Open the [Creator Style Editor][style editor] and select the **Open** toolbar button.
++
+The **Open Style** dialog box opens.
+
+Enter your [subscription key][subscription key] in the **Enter your Azure Maps subscription key** field.
+
+Next, select the geography associated with your subscription key in the drop-down list.
++
+Select the **Get map configuration list** button to get a list of every map configuration associated with the active Creator resource.
++
+> [!NOTE]
+> If the map configuration was created as part of a custom style and has a user-provided alias, that alias will appear in the map configuration drop-down list; otherwise, the `mapConfigurationId` will appear. The default map configuration ID for any given tileset can be found by using the [tileset Get][tileset get] HTTP request and passing in the tileset ID:
+>
+> ```http
+> https://{geography}.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2022-09-01-preview
+> ```
+>
+> The `mapConfigurationId` is returned in the body of the response, for example:
+>
+> ```json
+> "defaultMapConfigurationId": "68d74ad9-4f84-99ce-06bb-19f487e8e692"
+> ```
+
+Once the map configuration drop-down list is populated with the IDs of all the map configurations in your Creator resource, select the desired map configuration. A drop-down list of *style + tileset* tuples then appears. Each tuple consists of the style alias or ID, followed by a plus sign (**+**) and then the `tilesetId`.
+
+Once you've selected the desired style, select the **Load selected style** button.
+
+#### About the open style dialog
++
+| # | Description |
+|||
+| 1 | Your Azure Maps account [subscription key][subscription key] |
+| 2 | Select the geography of the Azure Maps account. |
+| 3 | A list of map configuration aliases. If a given map configuration has no alias, the `mapConfigurationId` will be shown instead. |
+| 4 | This value is created from a combination of the style and tileset. If the style has an alias, it will be shown; if not, the `styleId` will be shown. The `tilesetId` will always be shown for the tileset value. |
+
+### Modify style
+
+Once your style is open in the visual editor, you can begin to modify the various elements of your indoor map such as changing the background colors of conference rooms, offices or restrooms. You can also change the font size for labels such as office numbers and define what appears at different zoom levels.
+
+#### Change background color
+
+To change the background color for all units in the specified layer, put your mouse pointer over the desired unit and select it using the left mouse button. You'll be presented with a pop-up menu showing the layers associated with the categories that the unit belongs to. Once you select the layer whose style properties you want to update, that layer is ready to be edited in the left pane.
++
+Open the color palette and select the color you wish to change the selected unit to.
++
+#### Base map
+
+The base map drop-down list on the visual editor toolbar presents a list of base map styles that affect the style attributes of the base map that your indoor map is part of. It will not affect the style elements of your indoor map but will enable you to see how your indoor map will look with the various base maps.
++
+#### Save custom styles
+
+Once you have made all of the desired changes to your styles, save the changes to your Creator resource. You can overwrite your style with the changes or create a new style.
+
+To save your changes, select the **Save** button on the toolbar.
++
+This will bring up the **Upload style & map configuration** dialog box:
++
+The following table describes the four fields you're presented with.
+
+| Property | Description |
+|-|-|
+| Style description | A user-defined description for this style. |
+| Style alias | An alias that can be used to reference this style.<BR>When referencing programmatically, the style will need to be referenced by the style ID if no alias is provided. |
+| Map configuration description | A user-defined description for this map configuration. |
+| Map configuration alias | An alias used to reference this map configuration.<BR>When referencing programmatically, the map configuration will need to be referenced by the map configuration ID if no alias is provided. |
+
+Some important things to know about aliases:
+
+1. Can be named using alphanumeric characters (0-9, a-z, A-Z), hyphens (-) and underscores (_).
+1. Can be used to reference the underlying object, whether a style or a map configuration, in place of that object's ID. This is especially important because neither styles nor map configurations can be updated: every time changes are saved, a new ID is generated. The alias, however, can remain the same, which makes referencing the object less error prone after it has been modified multiple times.
+
+> [!WARNING]
+> Duplicate aliases are not allowed. If you use the alias of an existing style or map configuration, the style or map configuration that the alias points to is overwritten. The existing style or map configuration is deleted, and references to its ID will result in errors. For more information, see [map configuration](creator-indoor-maps.md#map-configuration) in the concepts article.
+
+Once you have entered values into each required field, select the **Upload map configuration** button to save the style and map configuration data to your Creator resource.
+
+> [!TIP]
+> Make a note of the map configuration `alias` value, it will be required when you [Instantiate the Indoor Manager][Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps.
+
+## Custom categories
+
+Azure Maps Creator has defined a [list of categories][categories]. When you create your [manifest][manifest], you associate each unit in your facility to one of these categories in the [unitProperties][unitProperties] object.
+
+There may be times when you want to create a new category. For example, you might want to apply different styling attributes to all rooms with special accommodations for people with disabilities, such as a phone room whose phones have screens that display what the caller is saying for people with hearing impairments.
+
+To do this, enter the desired value in the `categoryName` for the desired `unitName` in the manifest JSON before uploading your drawing package.
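If you prefer to script that manifest edit, the following PowerShell sketch updates one unit's `categoryName` before the drawing package is zipped and uploaded. The file path, unit name, and category value here are hypothetical; use the names from your own manifest.

```powershell
# Load the drawing package manifest (path is a placeholder).
$manifest = Get-Content -Raw -Path .\manifest.json | ConvertFrom-Json

# Assign the custom category to the unit(s) of interest (unit name is hypothetical).
foreach ($unit in $manifest.unitProperties | Where-Object { $_.unitName -eq "PHONE-101" }) {
    $unit | Add-Member -MemberType NoteProperty -Name "categoryName" -Value "room.accessible.phone" -Force
}

# Write the manifest back before zipping and uploading the drawing package.
$manifest | ConvertTo-Json -Depth 20 | Set-Content -Path .\manifest.json
```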
++
+Once opened in the visual editor, you'll notice that this category name isn't associated with any layer and has no default styling. In order to apply styling to it, you'll need to create a new layer and add the new category to it.
++
+To create a new layer, select the duplicate button on an existing layer. This creates a copy of the selected layer that you can modify as needed. Next, rename the layer by typing a new name into the **ID** field. For this example, we entered *indoor_unit_room_accessible*.
++
+Once you've created a new layer, you need to associate your new category name with it. This is done by editing the copied layer to remove the existing categories and add the new one.
+
+For example, the JSON might look something like this:
+
+```json
+{
+ "id": "indoor_unit_room_accessible",
+ "type": "fill",
+ "filter": [
+ "all",
+ ["has", "floor0"],
+ [
+ "any",
+ [
+ "case",
+ [
+ "==",
+ [
+ "typeof",
+ ["get", "categoryName"]
+ ],
+ "string"
+ ],
+ [
+ "==",
+ ["get", "categoryName"],
+ "room.accessible.phone"
+ ],
+ false
+ ]
+ ]
+ ],
+ "layout": {"visibility": "visible"},
+ "metadata": {
+ "microsoft.maps:layerGroup": "unit"
+ },
+ "minzoom": 16,
+ "paint": {
+ "fill-antialias": true,
+ "fill-color": [
+ "string",
+ ["feature-state", "color"],
+ "rgba(230, 230, 230, 1)"
+ ],
+ "fill-opacity": 1,
+ "fill-outline-color": "rgba(120, 120, 120, 1)"
+ },
+ "source-layer": "Indoor unit",
+ "source": "{tilesetId}"
+}
+```
+
+Only features that match the filter are displayed on the map. You need to edit the filter to remove any categories that you don't want to appear on the map and add the new category.
+
+For example, the filter JSON might look something like this:
+
+```json
+[
+ "all",
+ ["has", "floor0"],
+ [
+ "any",
+ [
+ "case",
+ [
+ "==",
+ [
+ "typeof",
+ ["get", "categoryName"]
+ ],
+ "string"
+ ],
+ [
+ "==",
+ ["get", "categoryName"],
+ "room.accessible.phone"
+ ],
+ false
+ ]
+ ]
+]
+```
+
+Now when you select that unit in the map, the pop-up menu shows the new layer ID, which in this example is `indoor_unit_room_accessible`. Once the layer is selected, you can make style edits.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Use the Azure Maps Indoor Maps module](how-to-use-indoor-module.md)
+
+[tileset]: /rest/api/maps/v20220901preview/tileset
+[tileset get]: /rest/api/maps/v20220901preview/tileset/get
+[tutorial]: tutorial-creator-indoor-maps.md
+[creator api]: /rest/api/maps-creator/
+[style editor]: https://azure.github.io/Azure-Maps-Style-Editor
+[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[manifest]: drawing-requirements.md#manifest-file-requirements
+[unitProperties]: drawing-requirements.md#unitproperties
+[categories]: https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json
+[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Title: Use the Azure Maps Indoor Maps module with Microsoft Creator services
+ Title: Use the Azure Maps Indoor Maps module with Microsoft Creator services with custom styles (preview)
description: Learn how to use the Microsoft Azure Maps Indoor Maps module to render maps by embedding the module's JavaScript libraries. Previously updated : 07/13/2021 Last updated : 09/23/2022 -
-# Use the Azure Maps Indoor Maps module
+# Use the Azure Maps Indoor Maps module with custom styles (preview)
+
+The Azure Maps Web SDK includes the *Azure Maps Indoor* module, enabling you to render indoor maps created in Azure Maps Creator services.
-The Azure Maps Web SDK includes the *Azure Maps Indoor* module. The *Azure Maps Indoor* module allows you to render indoor maps created in Azure Maps Creator services.
+When you create an indoor map using Azure Maps Creator, default styles are applied. Azure Maps Creator now also supports customizing the styles of the different elements of your indoor maps using the [Style Rest API](/rest/api/maps/v20220901preview/style), or the [visual style editor](https://azure.github.io/Azure-Maps-Style-Editor/).
## Prerequisites
-1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
-2. [Create a Creator resource](how-to-manage-creator.md)
-3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
-4. Get a `tilesetId` and a `statesetId` by completing the [tutorial for creating Indoor maps](tutorial-creator-indoor-maps.md).
- You'll need to use these identifiers to render indoor maps with the Azure Maps Indoor Maps module.
+- [Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
+- [Azure Maps Creator resource](how-to-manage-creator.md)
+- [Subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account).
+- [Map configuration][mapConfiguration] alias or ID. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps][tutorial] tutorial helpful.
+
+You'll need the map configuration `alias` (or `mapConfigurationId`) to render indoor maps with custom styles via the Azure Maps Indoor Maps module.
## Embed the Indoor Maps module
To use the globally hosted Azure Content Delivery Network version of the *Azure
Or, you can download the *Azure Maps Indoor* module. The *Azure Maps Indoor* module contains a client library for accessing Azure Maps services. Follow the steps below to install and load the *Indoor* module into your web application.
- 1. Install the [azure-maps-indoor package](https://www.npmjs.com/package/azure-maps-indoor).
+ 1. Install the latest [azure-maps-indoor package](https://www.npmjs.com/package/azure-maps-indoor).
```powershell >npm install azure-maps-indoor
To use the globally hosted Azure Content Delivery Network version of the *Azure
<script src="node_modules/azure-maps-indoor/dist/atlas-indoor.min.js"></script> ```
-## Instantiate the Map object
+## Set the domain and instantiate the Map object
+
+Set the map domain with a prefix matching the location of your Creator resource, `US` or `EU`, for example:
+
+ `atlas.setDomain('us.atlas.microsoft.com');`
-First, create a *Map object*. The *Map object* will be used in the next step to instantiate the *Indoor Manager* object. The code below shows you how to instantiate the *Map object*:
+For more information, see [Azure Maps service geographic scope][geos].
+
+Next, instantiate a *Map object* with the `mapConfiguration` property set to the `alias` or `mapConfigurationId` of your map configuration, then set your `styleAPIVersion` to `2022-09-01-preview`.
+
+The *Map object* will be used in the next step to instantiate the *Indoor Manager* object. The code below shows you how to instantiate the *Map object* with `mapConfiguration`, `styleAPIVersion` and map domain set:
```javascript const subscriptionKey = "<Your Azure Maps Primary Subscription Key>";
+const region = "<Your Creator resource region: us or eu>"
+const mapConfiguration = "<map configuration alias or ID>"
+atlas.setDomain(`${region}.atlas.microsoft.com`);
const map = new atlas.Map("map-id", { //use your facility's location
- center: [-122.13203, 47.63645],
- //or, you can use bounds: [# west, # south, # east, # north] and replace # with your map's bounds
- style: "blank",
- view: 'Auto',
- authOptions: {
+ center: [-122.13315, 47.63637],
+ //or, you can use bounds: [# west, # south, # east, # north] and replace # with your Map bounds
+ authOptions: {
authType: 'subscriptionKey', subscriptionKey: subscriptionKey }, zoom: 19,+
+ mapConfiguration: mapConfiguration,
+ styleAPIVersion: '2022-09-01-preview'
}); ``` ## Instantiate the Indoor Manager
-To load the indoor tilesets and map style of the tiles, you must instantiate the *Indoor Manager*. Instantiate the *Indoor Manager* by providing the *Map object* and the corresponding `tilesetId`. If you wish to support [dynamic map styling](indoor-map-dynamic-styling.md), you must pass the `statesetId`. The `statesetId` variable name is case-sensitive. Your code should like the JavaScript below.
+To load the indoor map style of the tiles, you must instantiate the *Indoor Manager*. Instantiate the *Indoor Manager* by providing the *Map object*. If you wish to support [dynamic map styling](indoor-map-dynamic-styling.md), you must pass the `statesetId`. The `statesetId` variable name is case-sensitive. Your code should look like the JavaScript below.
-```javascript
-const tilesetId = "<tilesetId>";
+```javascript
const statesetId = "<statesetId>"; const indoorManager = new atlas.indoor.IndoorManager(map, {
- tilesetId: tilesetId,
- statesetId: statesetId // Optional
+ statesetId: statesetId // Optional
}); ``` To enable polling of state data you provide, you must provide the `statesetId` and call `indoorManager.setDynamicStyling(true)`. Polling state data lets you dynamically update the state of dynamic properties or *states*. For example, a feature such as room can have a dynamic property (*state*) called `occupancy`. Your application may wish to poll for any *state* changes to reflect the change inside the visual map. The code below shows you how to enable state polling: ```javascript
-const tilesetId = "<tilesetId>";
const statesetId = "<statesetId>"; const indoorManager = new atlas.indoor.IndoorManager(map, {
- tilesetId: tilesetId,
- statesetId: statesetId // Optional
+ statesetId: statesetId // Optional
}); if (statesetId.length > 0) {
if (statesetId.length > 0) {
} ```
-## Geographic Settings (Optional)
-
-This guide assumes that you've created your Creator service in the United States. If so, you can skip this section. However, if your Creator service was created in Europe, add the following code:
-
-```javascript
- indoorManager.setOptions({ geography: 'eu' });.
-```
-
-## Indoor Level Picker Control
+## Indoor level picker control
The *Indoor Level Picker* control allows you to change the level of the rendered map. You can optionally initialize the *Indoor Level Picker* control via the *Indoor Manager*. Here's the code to initialize the level control picker:
const levelControl = new atlas.control.LevelControl({ position: "top-right" });
indoorManager.setOptions({ levelControl }); ```
-## Indoor Events
+## Indoor events
The *Azure Maps Indoor* module supports *Map object* events. The *Map object* event listeners are invoked when a level or facility has changed. If you want to run code when a level or a facility has changed, place your code inside the event listener. The code below shows how event listeners can be added to the *Map object*.
map.events.add("facilitychanged", indoorManager, (eventData) => {
The `eventData` variable holds information about the level or facility that invoked the `levelchanged` or `facilitychanged` event, respectively. When a level changes, the `eventData` object will contain the `facilityId`, the new `levelNumber`, and other metadata. When a facility changes, the `eventData` object will contain the new `facilityId`, the new `levelNumber`, and other metadata.
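For example, a minimal sketch (assuming the `map` and `indoorManager` objects created in the previous sections) that registers both listeners might look like this:

```javascript
// Invoked when the displayed level changes; eventData includes the facilityId and the new levelNumber.
map.events.add("levelchanged", indoorManager, (eventData) => {
  console.log(`Level changed to ${eventData.levelNumber} in facility ${eventData.facilityId}`);
});

// Invoked when the displayed facility changes; eventData includes the new facilityId and levelNumber.
map.events.add("facilitychanged", indoorManager, (eventData) => {
  console.log(`Facility changed to ${eventData.facilityId}, level ${eventData.levelNumber}`);
});
```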
-## Example: Use the Indoor Maps Module
+## Example: custom styling: consume map configuration in WebSDK (preview)
-This example shows you how to use the *Azure Maps Indoor* module in your web application. Although the example is limited in scope, it covers the basics of what you need to get started using the *Azure Maps Indoor* module. The complete HTML code is below these steps.
+When you create an indoor map using Azure Maps Creator, default styles are applied. Azure Maps Creator now also supports customizing your indoor styles. For more information, see [Create custom styles for indoor maps](how-to-create-custom-styles.md). Creator also offers a [visual style editor][visual style editor].
-1. Use the Azure Content Delivery Network [option](#embed-the-indoor-maps-module) to install the *Azure Maps Indoor* module.
+1. Follow the [Create custom styles for indoor maps](how-to-create-custom-styles.md) how-to article to create your custom styles. Make a note of the map configuration alias after saving your changes.
-2. Create a new HTML file
+2. Use the [Azure Content Delivery Network](#embed-the-indoor-maps-module) option to install the *Azure Maps Indoor* module.
-3. In the HTML header, reference the *Azure Maps Indoor* module JavaScript and style sheet styles.
+3. Create a new HTML file
-4. Initialize a *Map object*. The *Map object* supports the following options:
+4. In the HTML header, reference the *Azure Maps Indoor* module JavaScript and style sheet.
+
+5. Set the map domain with a prefix matching the location of your Creator resource: `atlas.setDomain('us.atlas.microsoft.com');` if your Creator resource was created in the US region, or `atlas.setDomain('eu.atlas.microsoft.com');` if it was created in the EU region.
+
+6. Initialize a *Map object*. The *Map object* supports the following options:
- `Subscription key` is your Azure Maps primary subscription key. - `center` defines a latitude and longitude for your indoor map center location. Provide a value for `center` if you don't want to provide a value for `bounds`. Format should appear as `center`: [-122.13315, 47.63637]. - `bounds` is the smallest rectangular shape that encloses the tileset map data. Set a value for `bounds` if you don't want to set a value for `center`. You can find your map bounds by calling the [Tileset List API](/rest/api/maps/v2/tileset/list). The Tileset List API returns the `bbox`, which you can parse and assign to `bounds`. Format should appear as `bounds`: [# west, # south, # east, # north].
- - `style` allows you to set the color of the background. To display a white background, define `style` as "blank".
+   - `mapConfiguration` is the ID or alias of the map configuration that defines the custom styles you want to display on the map. Use the map configuration ID or alias from step 1.
+   - `style` allows you to set the initial style from your map configuration to display. If unset, the style matching the map configuration's default configuration is used.
- `zoom` allows you to specify the min and max zoom levels for your map.
+   - `styleAPIVersion`: pass **'2022-09-01-preview'**, which is required while Custom Styling is in public preview.
-5. Next, create the *Indoor Manager* module. Assign the *Azure Maps Indoor* `tilesetId`, and optionally add the `statesetId`.
+7. Next, create the *Indoor Manager* module with the *Indoor Level Picker* control instantiated as part of the *Indoor Manager* options. Optionally, set the `statesetId` option.
-6. Instantiate the *Indoor Level Picker* control.
+8. Add *Map object* event listeners.
-7. Add *Map object* event listeners.
+> [!TIP]
+> The map configuration is referenced using the `mapConfigurationId` or `alias`. Each time you edit or change a map configuration, its ID changes but its alias remains the same. It's recommended to reference the map configuration by its alias in your applications. For more information, see [map configuration](creator-indoor-maps.md#map-configuration) in the concepts article.
Your file should now look similar to the HTML below.
Your file should now look similar to the HTML below.
<body> <div id="map-id"></div> <script>
- const subscriptionKey = "<Your Azure Maps Primary Subscription Key>";
- const tilesetId = "<your tilesetId>";
- const statesetId = "<your statesetId>";
+ const subscriptionKey = "<Your Azure Maps Subscription Key>";
+ const mapConfig = "<Your map configuration id or alias>";
+ const statesetId = "<Your statesetId>";
+ const region = "<Your Creator resource region: us or eu>"
+ atlas.setDomain(`${region}.atlas.microsoft.com`);
const map = new atlas.Map("map-id", { //use your facility's location center: [-122.13315, 47.63637], //or, you can use bounds: [# west, # south, # east, # north] and replace # with your Map bounds
- style: "blank",
- view: 'Auto',
authOptions: { authType: 'subscriptionKey', subscriptionKey: subscriptionKey }, zoom: 19,+
+ mapConfiguration: mapConfig,
+ styleAPIVersion: '2022-09-01-preview'
}); const levelControl = new atlas.control.LevelControl({
Your file should now look similar to the HTML below.
const indoorManager = new atlas.indoor.IndoorManager(map, { levelControl: levelControl, //level picker
- tilesetId: tilesetId,
statesetId: statesetId // Optional });
Learn more about how to add more data to your map:
> [!div class="nextstepaction"] > [Code samples](/samples/browse/?products=azure-maps)+
+[mapConfiguration]: /rest/api/maps/v20220901preview/mapconfiguration
+[tutorial]: tutorial-creator-indoor-maps.md
+[geos]: geographic-scope.md
+[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor/
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
This tutorial describes how to create indoor maps for use in Microsoft Azure Map
> * Convert your Drawing package into map data. > * Create a dataset from your map data. > * Create a tileset from the data in your dataset.
+> * Get the default map configuration ID from your tileset.
In the next tutorials in the Creator series you'll learn to:
To create a tileset:
5. Enter the following URL to the [Tileset API](/rest/api/maps/v2/tileset). The request should look like the following URL (replace `{datasetId`} with the `datasetId` obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section above: ```http
- https://us.atlas.microsoft.com/tilesets?api-version=2.0&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
+ https://us.atlas.microsoft.com/tilesets?api-version=v20220901preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
``` 6. Select **Send**.
To check the status of the tileset creation process and retrieve the `tilesetId`
:::image type="content" source="./media/tutorial-creator-indoor-maps/tileset-id.png" alt-text="A screenshot of Postman highlighting the tileset ID that is part of the value of the resource location URL in the responses header.":::
+## The map configuration (preview)
+
+Once your tileset creation completes, you can get the `mapConfigurationId` using the [tileset get](/rest/api/maps/v20220901preview/tileset/get) HTTP request:
+
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *GET mapConfigurationId from Tileset*.
+
+4. Select the **GET** HTTP method.
+
+5. Enter the following URL to the [Tileset API](/rest/api/maps/v20220901preview/tileset), passing in the tileset ID you obtained in the previous step.
+
+ ```http
+ https://us.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
+ ```
+
+6. Select **Send**.
+
+7. The tileset JSON appears in the body of the response. Scroll down to see the `mapConfigurationId`:
+
+ ```json
+ "defaultMapConfigurationId": "5906cd57-2dba-389b-3313-ce6b549d4396"
+ ```
+
+For more information, see [Map configuration](creator-indoor-maps.md#map-configuration) in the indoor maps concepts article.
+
+<!--For additional information, see [Create custom styles for indoor maps](how-to-create-custom-styles.md).-->
+ ## Additional information * For additional information see the how to [Use the Azure Maps Indoor Maps module](how-to-use-indoor-module.md) article.
To learn how to query Azure Maps Creator [datasets](/rest/api/maps/v2/dataset) u
> [!div class="nextstepaction"] > [Tutorial: Query datasets with WFS API](tutorial-creator-wfs.md)+
+> [!div class="nextstepaction"]
+> [Create custom styles for indoor maps](how-to-create-custom-styles.md)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
| Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine | <sup>1</sup> On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.<br>
- <sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).
+ <sup>2</sup> Azure Monitor Linux Agent versions 1.15.2 and higher support syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).
+
+ >[!NOTE]
+ >On rsyslog-based systems, Azure Monitor Linux Agent adds forwarding rules to the default ruleset defined in the rsyslog configuration. If multiple rulesets are used, inputs bound to non-default ruleset(s) are **not** forwarded to Azure Monitor Agent. For more information about multiple rulesets in rsyslog, see the [official documentation](https://www.rsyslog.com/doc/master/concepts/multi_ruleset.html).
## Supported services and features
The following tables list the operating systems that Azure Monitor Agent and the
| Red Hat Enterprise Linux Server 6.7+ | | X | X | | Rocky Linux 8 | X | X | | | SUSE Linux Enterprise Server 15 SP4 | X<sup>3</sup> | | |
+| SUSE Linux Enterprise Server 15 SP3 | X | | |
| SUSE Linux Enterprise Server 15 SP2 | X | | | | SUSE Linux Enterprise Server 15 SP1 | X | X | | | SUSE Linux Enterprise Server 15 | X | X | |
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md
Here's how AMA collects syslog events:
3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog DCR not available' and **Problem type** as 'I need help configuring data collection from a VM'. 3. Validate the layout of the Syslog collection workflow to ensure all necessary pieces are in place and accessible: 1. For `rsyslog` users, ensure the `/etc/rsyslog.d/10-azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `rsyslog` daemon (syslog user).
+ 1. Check your rsyslog configuration at `/etc/rsyslog.conf` and `/etc/rsyslog.d/*` to see if you have any inputs bound to a non-default ruleset, as messages from these inputs will not be forwarded to Azure Monitor Agent. For instance, messages from an input configured with a non-default ruleset like `input(type="imtcp" port="514" `**`ruleset="myruleset"`**`)` will not be forwarded.
2. For `syslog-ng` users, ensure the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `syslog-ng` daemon (syslog user). 3. Ensure the file `/run/azuremonitoragent/default_syslog.socket` exists and is accessible by `rsyslog` or `syslog-ng` respectively. 4. Check for a corresponding drop in count of processed syslog events in `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos`. If such drop isn't indicated in the file, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog data dropped in pipeline' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
The [log trace](asp-net-trace-logs.md) is associated with other telemetry to giv
Application Insights provides other features including, but not limited to: -- [Live Metrics](live-stream.md) – observe activity from your deployed application in real time with no effect on the host environment -- [Availability](availability-overview.md) – also known as “Synthetic Transaction Monitoring”, probe your applications external endpoint(s) to test the overall availability and responsiveness over time -- [GitHub or Azure DevOps integration](work-item-integration.md) – create GitHub or Azure DevOps work items in context of Application Insights data -- [Usage](usage-overview.md) – understand which features are popular with users and how users interact and use your application
+- [Live Metrics](live-stream.md) – observe activity from your deployed application in real time with no effect on the host environment
+- [Availability](availability-overview.md) – also known as “Synthetic Transaction Monitoring”, probe your application's external endpoint(s) to test the overall availability and responsiveness over time
+- [GitHub or Azure DevOps integration](work-item-integration.md) – create [GitHub](https://learn.microsoft.com/training/paths/github-administration-products/) or [Azure DevOps](https://learn.microsoft.com/azure/devops/?view=azure-devops) work items in context of Application Insights data
+- [Usage](usage-overview.md) – understand which features are popular with users and how users interact and use your application
- [Smart Detection](proactive-diagnostics.md) – automatic failure and anomaly detection through proactive telemetry analysis In addition, Application Insights supports [Distributed Tracing](distributed-tracing.md), also known as “distributed component correlation”. This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a given execution or transaction. The ability to trace activity end-to-end is increasingly important for applications that have been built as distributed components or [microservices](https://learn.microsoft.com/azure/architecture/guide/architecture-styles/microservices).
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
ApplicationInsightsAgent_EXTENSION_VERSION -> ~3
APPLICATIONINSIGHTS_ENABLE_AGENT: true ```
+### Troubleshooting
+
+* Sometimes the latest version of the Application Insights Java agent isn't available in Azure Functions because it takes a few months for new versions to roll out to all regions. If you need a specific or the latest version of the Application Insights Java auto-instrumentation agent to monitor your app in Azure Functions, you can upload the agent manually by following [these instructions](https://github.com/Azure/azure-functions-java-worker/wiki/Distributed-Tracing-for-Java-Azure-Functions#customize-distribute-agent).
+ ## Distributed tracing for Python Function apps To collect custom telemetry from services such as Redis, Memcached, MongoDB, and more, you can use the [OpenCensus Python Extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure) and [log your telemetry](../../azure-functions/functions-reference-python.md?tabs=azurecli-linux%2capplication-level#log-custom-telemetry). You can find the list of supported services [here](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
azure-monitor Autoscale Custom Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-custom-metric.md
This article describes how to set up autoscale for a web app by using a custom m
Autoscale allows you to add and remove resources to handle increases and decreases in load. In this article, we'll show you how to set up autoscale for a web app by using one of the Application Insights metrics to scale the web app in and out.
+> [!NOTE]
+> Autoscaling on custom metrics in Application Insights is supported only for metrics published to **Standard** and **Azure.ApplicationInsights** namespaces. If any other namespace is used for custom metrics in Application Insights, an **Unsupported Metric** error is returned.
+ Azure Monitor autoscale applies to: + [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/)
To learn more about autoscale, see the following articles:
- [Overview of autoscale](./autoscale-overview.md) - [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md) - [Best practices for Azure Monitor autoscale](./autoscale-best-practices.md)-- [Autoscale REST API](/rest/api/monitor/autoscalesettings)
+- [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
If you find that you have excessive billable data for a particular data type, th
```kusto SecurityEvent | summarize AggregatedValue = count() by EventID
+| order by AggregatedValue desc nulls last
``` **Log Management** solution
SecurityEvent
Usage | where Solution == "LogManagement" and iff(isnotnull(toint(IsBillable)), IsBillable == true, IsBillable == "true") == true | summarize AggregatedValue = count() by DataType
+| order by AggregatedValue desc nulls last
``` **Perf** data type
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
By default, all tables in your Log Analytics are Analytics tables, and available
You can currently configure the following tables for Basic Logs: - All tables created with the [Data Collection Rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) -- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2), which [Container Insights](../containers/container-insights-overview.md) uses and which include verbose text-based log records.-- [AppTraces](/azure/azure-monitor/reference/tables/apptraces), which contains freeform log records for application traces in Application Insights.
+- [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) -- Used in [Container Insights](../containers/container-insights-overview.md) and includes verbose text-based log records.
+- [AppTraces](/azure/azure-monitor/reference/tables/apptraces) -- Freeform log records for application traces in Application Insights.
+- [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/ContainerAppConsoleLogs) -- Logs generated by Container Apps within a Container App Environment.
> [!NOTE] > Tables created with the [Data Collector API](data-collector-api.md) do not support Basic Logs.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 09/26/2022 Last updated : 09/28/2022 # Guidelines for Azure NetApp Files network planning
The following table describes what's supported for each network features confi
| Connectivity to [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) | No | No | | Azure policies (for example, custom naming policies) on the Azure NetApp Files interface | No | No | | Load balancers for Azure NetApp Files traffic | No | No |
-| Dual stack (IPv4 and IPv6) VNet | No <br> (IPv4 only supported) | No <br> (IPv4 only supported) |
+| Dual stack (IPv4 and IPv6) VNet | No <br> (IPv4 only supported) | No <br> (IPv4 only supported) |
> [!IMPORTANT]
-> Conversion between Basic and Standard networking features in either direction is not currently supported.
+> Conversion between Basic and Standard networking features in either direction is not currently supported. Additionally, you cannot create a Standard volume from the snapshot of a Basic volume.
### Supported network topologies
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
Two settings are available for network features:
* The ability to locate storage compatible with the desired type of network features depends on the VNet specified. If you cannot create a volume because of insufficient resources, you can try a different VNet for which compatible storage is available.
-* You cannot create a standard volume from the snapshot of a basic volume.
+* You cannot create a Standard volume from the snapshot of a Basic volume.
* Conversion between Basic and Standard networking features in either direction is not currently supported.
azure-resource-manager Bicep Functions Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-files.md
Title: Bicep functions - files description: Describes the functions to use in a Bicep file to load content from a file. Previously updated : 07/08/2022 Last updated : 09/28/2022 # File functions for Bicep
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| Parameter | Required | Type | Description | |: |: |: |: |
-| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. |
+| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file, and it should be a compile-time constant (cannot use variables). |
### Remarks
-Use this function when you have binary content you would like to include in deployment. Rather than manually encoding the file to a base64 string and adding it to your Bicep file, load the file with this function. The file is loaded when the Bicep file is compiled to a JSON template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
+Use this function when you have binary content you would like to include in deployment. Rather than manually encoding the file to a base64 string and adding it to your Bicep file, load the file with this function. The file is loaded when the Bicep file is compiled to a JSON template. Hence variables cannot be used in filePath as they are not resolved at this stage. During deployment, the JSON template contains the contents of the file as a hard-coded string.
This function requires **Bicep version 0.4.412 or later**.
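For example, a minimal sketch (the file path `certs/example.pfx` is a placeholder) that loads a binary file might look like this:

```bicep
// Load a binary file as a base64-encoded string at compile time.
// The path must be a literal string; variables can't be used here.
var certificateData = loadFileAsBase64('certs/example.pfx')
```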
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| Parameter | Required | Type | Description | |: |: |: |: |
-| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. |
+| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file, and it should be a compile-time constant (cannot use variables). |
| jsonPath | No | string | JSONPath expression to take only a part of the JSON into ARM. | | encoding | No | string | The file encoding. The default value is `utf-8`. The available options are: `iso-8859-1`, `us-ascii`, `utf-16`, `utf-16BE`, or `utf-8`. | ### Remarks
-Use this function when you have JSON content or minified JSON content that is stored in a separate file. Rather than duplicating the JSON content in your Bicep file, load the content with this function. You can load a part of a JSON file by specifying a JSON path. The file is loaded when the Bicep file is compiled to the JSON template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
+Use this function when you have JSON content or minified JSON content that is stored in a separate file. Rather than duplicating the JSON content in your Bicep file, load the content with this function. You can load a part of a JSON file by specifying a JSON path. The file is loaded when the Bicep file is compiled to the JSON template. Hence variables cannot be used in filePath as they are not resolved at this stage. During deployment, the JSON template contains the contents of the file as a hard-coded string.
In VS Code, the properties of the loaded object are available to IntelliSense. For example, you can create a file with values to share across many Bicep files. An example is shown in this article.
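For instance, a minimal sketch (the file and property names are placeholders) might look like this:

```bicep
// Load shared values from a JSON file at compile time.
// The path must be a literal string; variables can't be used here.
var sharedSettings = loadJsonContent('shared-settings.json')

// Properties of the loaded object can be used like any other object properties.
output storageSkuName string = sharedSettings.storageSkuName
```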
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| Parameter | Required | Type | Description | |: |: |: |: |
-| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. |
+| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file, and it should be a compile-time constant (cannot use variables). |
| encoding | No | string | The file encoding. The default value is `utf-8`. The available options are: `iso-8859-1`, `us-ascii`, `utf-16`, `utf-16BE`, or `utf-8`. | ### Remarks
-Use this function when you have content that is more stored in a separate file. Rather than duplicating the content in your Bicep file, load the content with this function. For example, you can load a deployment script from a file. The file is loaded when the Bicep file is compiled to the JSON template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
+Use this function when you have content that is more stored in a separate file. Rather than duplicating the content in your Bicep file, load the content with this function. For example, you can load a deployment script from a file. The file is loaded when the Bicep file is compiled to the JSON template. Hence variables cannot be used in filePath as they are not resolved at this stage. During deployment, the JSON template contains the contents of the file as a hard-coded string.
Use the [`loadJsonContent()`](#loadjsoncontent) function to load JSON files.
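As a minimal sketch (the script path is a placeholder):

```bicep
// Load a deployment script file as a string at compile time.
// The path must be a literal string; variables can't be used here.
var installScript = loadTextContent('scripts/install.sh', 'utf-8')
```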
azure-resource-manager Decompile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/decompile.md
Title: Decompile ARM template JSON to Bicep description: Describes commands for decompiling Azure Resource Manager templates to Bicep files. Previously updated : 07/18/2022 Last updated : 09/28/2022
The decompiled file works, but it has some names that you might want to change.
var uniqueStorageName = 'store${uniqueString(resourceGroup().id)}' ```
-The resource has a symbolic name that you might want to change. Instead of `storageAccountName` for the symbolic name, use `exampleStorage`.
-
-```bicep
-resource exampleStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
-```
+To rename across the file, right-click the name, and then select **Rename symbol**. You can also use the **F2** hotkey.
-Since you changed the name of the variable for the storage account name, you need to change where it's used.
+The resource has a symbolic name that you might want to change. Instead of `storageAccountName` for the symbolic name, use `exampleStorage`.
```bicep resource exampleStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
- name: uniqueStorageName
-```
-
-And in the output, use:
-
-```bicep
-output storageAccountName string = uniqueStorageName
``` The complete file is:
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/outputs.md
Title: Outputs in Bicep description: Describes how to define output values in Bicep Previously updated : 09/16/2022 Last updated : 09/28/2022 # Outputs in Bicep
-This article describes how to define output values in a Bicep file. You use outputs when you need to return values from the deployed resources.
+This article describes how to define output values in a Bicep file. You use outputs when you need to return values from the deployed resources. You are limited to 64 outputs in a Bicep file. For more information, see [Template limits](../templates/best-practices.md#template-limits).
## Define output values
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md
description: Describes how to define parameters in a Bicep file.
Previously updated : 09/06/2022 Last updated : 09/28/2022 # Parameters in Bicep
Resource Manager resolves parameter values before starting the deployment operat
Each parameter must be set to one of the [data types](data-types.md).
-You are limited to 256 parameters. For more information, see [Template limits](../templates/best-practices.md#template-limits).
+You are limited to 256 parameters in a Bicep file. For more information, see [Template limits](../templates/best-practices.md#template-limits).
For parameter best practices, see [Parameters](./best-practices.md#parameters).
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/resource-declaration.md
Title: Declare resources in Bicep description: Describes how to declare resources to deploy in Bicep. Previously updated : 02/04/2022 Last updated : 09/28/2022 # Resource declaration in Bicep
-This article describes the syntax you use to add a resource to your Bicep file.
+This article describes the syntax you use to add a resource to your Bicep file. You are limited to 800 resources in a Bicep file. For more information, see [Template limits](../templates/best-practices.md#template-limits).
## Declaration
azure-resource-manager Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/variables.md
description: Describes how to define variables in Bicep
Previously updated : 11/12/2021 Last updated : 09/28/2022 # Variables in Bicep
This article describes how to define and use variables in your Bicep file. You u
Resource Manager resolves variables before starting the deployment operations. Wherever the variable is used in the Bicep file, Resource Manager replaces it with the resolved value.
+You are limited to 256 variables in a Bicep file. For more information, see [Template limits](../templates/best-practices.md#template-limits).
+ ## Define variable The syntax for defining a variable is:
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
description: Shows the rules and restrictions for naming Azure resources.
Previously updated : 06/17/2022 Last updated : 09/28/2022 # Naming rules and restrictions for Azure resources
This article summarizes naming rules and restrictions for Azure resources. For r
This article lists resources by resource provider namespace. For a list of how resource providers match Azure services, see [Resource providers for Azure services](azure-services-resource-providers.md).
-Resource names are case-insensitive unless noted in the valid characters column.
- > [!NOTE]
-> When retrieving resource names using various APIs, returned values may display different case values than what is listed in the valid characters table.
+> Resource and resource group names are case-insensitive unless specifically noted in the valid characters column.
+>
+> When using various APIs to retrieve the name for a resource or resource group, the returned value may have different casing than what you originally specified for the name. The returned value may even display different case values than what is listed in the valid characters table.
+>
+> Always perform a case-insensitive comparison of names.
In the following tables, the term alphanumeric refers to:
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/outputs.md
Title: Outputs in templates description: Describes how to define output values in an Azure Resource Manager template (ARM template). Previously updated : 09/16/2022 Last updated : 09/28/2022
The format of each output value must resolve to one of the [data types](data-typ
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [outputs](../bicep/outputs.md).
+You are limited to 64 outputs in a template. For more information, see [Template limits](./best-practices.md#template-limits).
+ ## Define output values The following example shows how to return a property from a deployed resource.
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameters.md
Title: Parameters in templates description: Describes how to define parameters in an Azure Resource Manager template (ARM template). Previously updated : 09/06/2022 Last updated : 09/28/2022 # Parameters in ARM templates
Each parameter must be set to one of the [data types](data-types.md).
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [parameters](../bicep/parameters.md).
-You are limited to 256 parameters. For more information, see [Template limits](./best-practices.md#template-limits).
+You are limited to 256 parameters in a template. For more information, see [Template limits](./best-practices.md#template-limits).
-For parameter best practices, see [Parameters](./best-practices.md#parameters).
+For parameter best practices, see [Parameters](./best-practices.md#parameters).
## Minimal declaration
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-declaration.md
Title: Declare resources in templates description: Describes how to declare resources to deploy in an Azure Resource Manager template (ARM template). Previously updated : 01/19/2022 Last updated : 09/28/2022 # Resource declaration in ARM templates
To deploy a resource through an Azure Resource Manager template (ARM template),
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [resource declaration](../bicep/resource-declaration.md).
+You are limited to 800 resources in a template. For more information, see [Template limits](./best-practices.md#template-limits).
+ ## Set resource type and version When adding a resource to your template, start by setting the resource type and API version. These values determine the other properties that are available for the resource.
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md
Title: Template structure and syntax description: Describes the structure and properties of Azure Resource Manager templates (ARM templates) using declarative JSON syntax. Previously updated : 07/18/2022 Last updated : 09/28/2022 # Understand the structure and syntax of ARM templates
Each element has properties you can set. This article describes the sections of
## Parameters
-In the `parameters` section of the template, you specify which values you can input when deploying the resources. You're limited to 256 parameters in a template. You can reduce the number of parameters by using objects that contain multiple properties.
+In the `parameters` section of the template, you specify which values you can input when deploying the resources. You're limited to [256 parameters](../management/azure-subscription-service-limits.md#general-limits) in a template. You can reduce the number of parameters by using objects that contain multiple properties.
The available properties for a parameter are:
In Bicep, see [parameters](../bicep/file.md#parameters).
## Variables
-In the `variables` section, you construct values that can be used throughout your template. You don't need to define variables, but they often simplify your template by reducing complex expressions. The format of each variable matches one of the [data types](data-types.md).
+In the `variables` section, you construct values that can be used throughout your template. You don't need to define variables, but they often simplify your template by reducing complex expressions. The format of each variable matches one of the [data types](data-types.md). You are limited to [256 variables](../management/azure-subscription-service-limits.md#general-limits) in a template.
The following example shows the available options for defining a variable:
In Bicep, user-defined functions aren't supported. Bicep does support a variety
## Resources
-In the `resources` section, you define the resources that are deployed or updated.
+In the `resources` section, you define the resources that are deployed or updated. You are limited to [800 resources](../management/azure-subscription-service-limits.md#general-limits) in a template.
You define resources with the following structure:
In Bicep, see [resources](../bicep/file.md#resources).
## Outputs
-In the `outputs` section, you specify values that are returned from deployment. Typically, you return values from resources that were deployed.
+In the `outputs` section, you specify values that are returned from deployment. Typically, you return values from resources that were deployed. You are limited to [64 outputs](../management/azure-subscription-service-limits.md#general-limits) in a template.
The following example shows the structure of an output definition:
azure-resource-manager Template Tutorial Create Multiple Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-multiple-instances.md
Title: Create multiple resource instances description: Learn how to create an Azure Resource Manager template (ARM template) to create multiple Azure resource instances. Previously updated : 04/23/2020 Last updated : 09/28/2022
To complete this article, you need:
``` 1. Select **Open** to open the file.
-1. There is a `Microsoft.Storage/storageAccounts` resource defined in the template. Compare the template to the [template reference](/azure/templates/Microsoft.Storage/storageAccounts). It's helpful to get some basic understanding of the template before customizing it.
+1. There's a `Microsoft.Storage/storageAccounts` resource defined in the template. Compare the template to the [template reference](/azure/templates/Microsoft.Storage/storageAccounts). It's helpful to get some basic understanding of the template before customizing it.
1. Select **File** > **Save As** to save the file as _azuredeploy.json_ to your local computer. ## Edit the template
From Visual Studio Code, make the following four changes:
![Azure Resource Manager creates multiple instances](./media/template-tutorial-create-multiple-instances/resource-manager-template-create-multiple-instances.png) 1. Add a `copy` element to the storage account resource definition. In the `copy` element, you specify the number of iterations and a variable for this loop. The count value must be a positive integer and can't exceed 800.
-2. The `copyIndex()` function returns the current iteration in the loop. You use the index as the name prefix. `copyIndex()` is zero-based. To offset the index value, you can pass a value in the `copyIndex()` function. For example, `copyIndex(1)`.
-3. Delete the `variables` element, because it's not used anymore.
-4. Delete the `outputs` element. It's no longer needed.
+
+ ```json
+ "copy": {
+ "name": "storageCopy",
+ "count": 3
+ },
+ ```
+
+1. The `copyIndex()` function returns the current iteration in the loop. You use the index as the name prefix. `copyIndex()` is zero-based. To offset the index value, you can pass a value in the `copyIndex()` function. For example, `copyIndex(1)`.
+
+ ```json
+ "name": "[format('{0}storage{1}', copyIndex(), uniqueString(resourceGroup().id))]",
+ ```
+
+
+1. Delete the `storageAccountName` parameter definition, because it's not used anymore.
+1. Delete the `outputs` element. It's no longer needed.
+1. Delete the `metadata` element.
The completed template looks like:
The completed template looks like:
"type": "string", "defaultValue": "Standard_LRS", "allowedValues": [
- "Standard_LRS",
+ "Premium_LRS",
+ "Premium_ZRS",
"Standard_GRS",
- "Standard_ZRS",
- "Premium_LRS"
+ "Standard_GZRS",
+ "Standard_LRS",
+ "Standard_RAGRS",
+ "Standard_RAGZRS",
+ "Standard_ZRS"
], "metadata": { "description": "Storage Account type"
The completed template looks like:
"type": "string", "defaultValue": "[resourceGroup().location]", "metadata": {
- "description": "Location for all resources."
+ "description": "Location for the storage account."
} } }, "resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
- "name": "[concat(copyIndex(),'storage', uniqueString(resourceGroup().id))]",
+ "apiVersion": "2021-06-01",
+ "name": "[format('{0}storage{1}', copyIndex(), uniqueString(resourceGroup().id))]",
"location": "[parameters('location')]", "sku": { "name": "[parameters('storageAccountType')]" }, "kind": "StorageV2", "copy": {
- "name": "storagecopy",
+ "name": "storageCopy",
"count": 3 }, "properties": {}
The completed template looks like:
} ```
+Save the changes.
+ For more information about creating multiple instances, see [Resource iteration in ARM templates](./copy-resources.md) ## Deploy the template
For more information about creating multiple instances, see [Resource iteration
-After a successful template deployment you can display the three storage accounts created in the specified resource group. Compare the storage account names with the name definition in the template.
+After a successful template deployment, you can display the three storage accounts created in the specified resource group. Compare the storage account names with the name definition in the template.
# [CLI](#tab/azure-cli)
azure-resource-manager Template Tutorial Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-deployment-script.md
na Previously updated : 07/19/2022 Last updated : 09/28/2022
The deployment script adds a certificate to the key vault. Configure the key vau
read upn && echo "Enter the user-assigned managed identity ID:" && read identityId &&
- adUserId=$((az ad user show --id jgao@microsoft.com) | jq -r '.id') &&
+ adUserId=$((az ad user show --id ${upn}) | jq -r '.id') &&
resourceGroupName="${projectName}rg" && keyVaultName="${projectName}kv" && az group create --name $resourceGroupName --location $location &&
When the Azure resources are no longer needed, clean up the resources you deploy
1. From the Azure portal, select **Resource group** from the left menu. 2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name. You will see a total of six resources in the resource group.
+3. Select the resource group name.
4. Select **Delete resource group** from the top menu. ## Next steps
azure-resource-manager Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/variables.md
Title: Variables in templates description: Describes how to define variables in an Azure Resource Manager template (ARM template). Previously updated : 01/19/2022 Last updated : 09/28/2022 # Variables in ARM templates
Resource Manager resolves variables before starting the deployment operations. W
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [variables](../bicep/variables.md).
+You are limited to 256 variables in a template. For more information, see [Template limits](./best-practices.md#template-limits).
+ ## Define variable When defining a variable, you don't specify a [data type](data-types.md) for the variable. Instead provide a value or template expression. The variable type is inferred from the resolved value. The following example sets a variable to a string.
azure-video-indexer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection.md
audioEffects: [{
## How to index audio effects
-In order to set the index process to include the detection of audio effects, the user should chose one of the **Advanced** presets under **Video + audio indexing** menu as can be seen below.
+To set the index process to include the detection of audio effects, select one of the **Advanced** presets under the **Video + audio indexing** menu, as shown below.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/audio-effects-detection/index-audio-effect.png" alt-text="Index Audio Effects image":::
Audio Effects in closed captions file will be retrieved with the following logic
## Adding audio effects in closed caption files
-Audio effects can be added to the closed captions files supported by Azure Video Indexer via the [Get video captions API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Captions) by choosing true in the `includeAudioEffects` parameter or via the video.ai portal experience by selecting **Download** -> **Closed Captions** -> **Include Audio Effects**.
+Audio effects can be added to the closed captions files supported by Azure Video Indexer via the [Get video captions API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Captions) by choosing true in the `includeAudioEffects` parameter or via the video.ai website experience by selecting **Download** -> **Closed Captions** -> **Include Audio Effects**.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/audio-effects-detection/close-caption.jpg" alt-text="Audio Effects in CC":::
azure-video-indexer Clapperboard Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/clapperboard-metadata.md
# Enable and view a clapperboard with extracted metadata (preview)
-The clapperboard insight is used to detect clapper board instances and information written on each. For example, *head* or *tail* (the board is upside-down), *production*, *roll*, *scene*, *take*, etc. A [clapperboard](https://en.wikipedia.org/wiki/Clapperboard)'s extracted metadata is most useful to customers involved in the movie post-production process.
+A clapperboard insight is used to detect clapperboard instances and information written on each. For example, *head* or *tail* (the board is upside-down), *production*, *roll*, *scene*, *take*, *date*, etc. The [clapperboard](https://en.wikipedia.org/wiki/Clapperboard)'s extracted metadata is most useful to customers involved in the movie post-production process.
-When the movie is being edited, the slate is removed from the scene but a metadata with what's on the clapper board is important. Azure Video Indexer extracts the data from clapperboards, preserves and presents the metadata as described in this article.
+When the movie is being edited, a clapperboard is removed from the scene; however, the information that was written on the clapperboard is important. Azure Video Indexer extracts the data from clapperboards, preserves, and presents the metadata.
+
+This article shows how to enable the post-production insight and view clapperboard instances with extracted metadata.
## View the insight
After the file has been uploaded and indexed, if you want to view the timeline o
### Clapperboards
-Clapperboards contain titles, like: *production*, *roll*, *scene*, *take* and values associated with each title.
-
-The titles and their values' quality may not always be recognizable. For more information, see [limitations](#clapperboard-limitations).
+Clapperboards contain fields with titles (for example, *production*, *roll*, *scene*, *take*) and values (content) associated with each title.
For example, take this clapperboard:
In the following example the board contains the following fields:
|date|FILTER (in this case the board contains no date)| |director|John| |production|Prod name|
-|scene|FPS|
+|scene|1|
|take|99| #### View the insight
To see the instances on the website, select **Insights** and scroll to **Clapper
If you checked the **Post-production** insight, You can also find the clapperboard instance and its timeline (includes time, fields' values) on the **Timeline** tab.
-#### Vew JSON
+#### View JSON
To display the JSON file:
The following table describes fields found in json:
## Clapperboard limitations
+The values may not always be correctly identified by the detection algorithm. Here are some limitations:
+ - The titles of the fields appearing on the clapper board are optimized to identify the most popular fields appearing on top of clapper boards. - Handwritten text or digital digits may not be correctly identified by the fields detection algorithm.-- The algorithm is optimized to identify fields categories that appear horizontally. -- The clapper board may not be detected if the frame is blurred or that the text written on it can't be identified by the human eye.
+- The algorithm is optimized to identify fields' categories that appear horizontally.
+- The clapperboard may not be detected if the frame is blurred or the text written on it can't be identified by the human eye.
- Empty fields' values may lead to wrong field categories. <!-- If a part of a clapper board is hidden a value with the highest confidence is shown. -->
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
The following image shows the first flow:
Select **Save** (at the top of the page).
- ![Screenshot of the create SAS URI by path logic.](./media/logic-apps-connector-arm-accounts/create-sas.png)
-
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/logic-apps-connector-arm-accounts/create-sas.png" alt-text="Screenshot of the create SAS URI by path logic." lightbox="./media/logic-apps-connector-arm-accounts/create-sas.png":::
+
Select **+New Step**. 1. Generate an access token.
The following image shows the first flow:
1. Back in your Logic App, create an **Upload video and index** action. 1. Select **Video Indexer(V2)**.
- 1. From Video Indexer(V2) chose **Upload Video and index**.
+ 1. From Video Indexer(V2), select **Upload Video and index**.
1. Set the connection to the Video Indexer account. |Key| Value|
Create the second flow, Logic Apps of type consumption. The second flow is t
1. Get Video Indexer insights. 1. Search for "Video Indexer".
- 1. From **Video Indexer(V2)** chose **Get Video Index** action.
+ 1. From **Video Indexer(V2)**, select the **Get Video Index** action.
Set the connection name:
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Resource Provider and Type: [Microsoft.VideoIndexer/accounts](/azure/azure-monit
| Category | Display Name | Additional information | |:|:-|| | VIAudit | Azure Video Indexer Audit Logs | Logs are produced from both the Video Indexer portal and the REST API. |
+| IndexingLogs | Indexing Logs | Azure Video Indexer indexing logs, used to monitor all file uploads, indexing jobs, and re-indexing operations when needed. |
<!-- --**END Examples** - -->
NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays
| Table | Description | Additional information | |:|:-|| | [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer)<!-- (S/azure/azure-monitor/reference/tables/viaudit)--> | <!-- description copied from previous link --> Events produced using Azure Video Indexer [portal](https://aka.ms/VIportal) or [REST API](https://aka.ms/vi-dev-portal). | |
+|VIIndexing| Events produced using Azure Video Indexer [upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [Re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs. |
<!--| [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics) | <!-- description copied from previous link --> <!--Metric data emitted by Azure services that measure their health and performance. | *TODO other important information about this type | | etc. | | |
The following schemas are in use by Azure Video Indexer
<!-- List the schema and their usage. This can be for resource logs, alerts, event hub formats, etc depending on what you think is important. -->
+#### Audit schema
+ ```json { "time": "2022-03-22T10:59:39.5596929Z",
The following schemas are in use by Azure Video Indexer
} ```
+#### Indexing schema
+
+```json
+{
+ "time": "2022-09-28T09:41:08.6216252Z",
+ "resourceId": "/SUBSCRIPTIONS/{SubscriptionId}/RESOURCEGROUPS/{ResourceGroup}/PROVIDERS/MICROSOFT.VIDEOINDEXER/ACCOUNTS/MY-VI-ACCOUNT",
+ "operationName": "UploadStarted",
+ "category": "IndexingLogs",
+ "correlationId": "5cc9a3ea-126b-4f53-a4b5-24b1a5fb9736",
+ "resultType": "Success",
+ "location": "eastus",
+ "operationVersion": "2.0",
+ "durationMs": "0",
+ "identity": {
+ "upn": "my-email@microsoft.com",
+ "claims": null
+ },
+ "properties": {
+ "accountName": "my-vi-account",
+ "accountId": "6961331d-16d3-413a-8f90-f86a5cabf3ef",
+ "videoId": "46b91bc012",
+ "indexing": {
+ "Language": "en-US",
+ "Privacy": "Private",
+ "Partition": null,
+ "PersonModelId": null,
+ "LinguisticModelId": null,
+ "AssetId": null,
+ "IndexingPreset": "Default",
+ "StreamingPreset": "Default",
+ "Description": null,
+ "Priority": null,
+ "ExternalId": null,
+ "Filename": "1 Second Video 1.mp4",
+ "AnimationModelId": null,
+ "BrandsCategories": null
+ }
+ }
+}
+ ```
++ ## Next steps <!-- replace below with the proper link to your main monitoring service article -->
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
See [Create diagnostic setting to collect platform logs and metrics in Azure](/a
| Category | Description | |:|:|
-|Audit | Read/Write operations|
+|Audit | Read/Write operations|
+|Indexing Logs| Monitor the indexing process from upload through indexing, and re-indexing when needed|
:::image type="content" source="./media/monitor/toc-diagnostics-save.png" alt-text="Screenshot of diagnostic settings." lightbox="./media/monitor/toc-diagnostics-save.png":::
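As a hedged sketch (not part of the original article), the following Azure PowerShell example shows one way to route both log categories to a Log Analytics workspace. The resource IDs are placeholders, the category names are taken from the tables above and should be verified in the diagnostic settings blade, and depending on your Az.Monitor version the setting may instead be applied with the newer `New-AzDiagnosticSetting` cmdlet.

```azurepowershell
# Placeholder resource IDs - replace with your Azure Video Indexer account and Log Analytics workspace.
$viAccountId = "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.VideoIndexer/accounts/<account>"
$workspaceId = "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

# Send the audit and indexing log categories to the workspace.
Set-AzDiagnosticSetting -Name "vi-diagnostics" `
    -ResourceId $viAccountId `
    -WorkspaceId $workspaceId `
    -Category VIAudit, IndexingLogs `
    -Enabled $true
```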
For a list of the tables used by Azure Monitor Logs and queryable by Log Analyti
### Sample Kusto queries
+#### Audit related sample queries
<!-- REQUIRED if you support logs. Please keep headings in this order --> <!-- Add sample Log Analytics Kusto queries for your service. -->
VIAudit
| project TimeGenerated, OperationName, ErrorMessage, ErrorType, CorrelationId, _ResourceId ```
+#### Indexing related sample queries
+
+```kusto
+// Display Video Indexer Account logs of all failed indexing operations.
+VIIndexing
+// | where AccountId == "<AccountId>" // to filter on a specific accountId, uncomment this line
+| where Status == "Failure"
+| summarize count() by bin(TimeGenerated, 1d)
+| render columnchart
+```
+
+```kusto
+// Video Indexer top 10 users by operations
+// Render timechart of top 10 users by operations, with an optional account id for filtering.
+// Trend of top 10 active Upn's
+VIIndexing
+// | where AccountId == "<AccountId>" // to filter on a specific accountId, uncomment this line
+| where OperationName in ("IndexingStarted", "ReindexingStarted")
+| summarize count() by Upn
+| top 10 by count_ desc
+| project Upn
+| join (VIIndexing
+| where TimeGenerated > ago(30d)
+| where OperationName in ("IndexingStarted", "ReindexingStarted")
+| summarize count() by Upn, bin(TimeGenerated,1d)) on Upn
+| project TimeGenerated, Upn, count_
+| render timechart
+```
+ ## Alerts <!-- SUGGESTED: Include useful alerts on metrics, logs, log conditions or activity log. Ask your PMs if you don't know.
azure-video-indexer Textless Slate Scene Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/textless-slate-scene-matching.md
In order to set the indexing process to include the slate metadata, select the *
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production clapperboards insights.":::
-After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark needed to view clapperboards.":::
- ### Insight This insight can only be viewed in the form of the downloaded json file.
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
You can use the Editor widget to create new projects and manage a video's insigh
## Embed videos
-This section discusses embedding videos by [using the portal](#the-portal-experience) or by [assembling the URL manually](#assemble-the-url-manually) into apps.
+This section discusses embedding videos by [using the website](#the-website-experience) or by [assembling the URL manually](#assemble-the-url-manually) into apps.
The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter. For example: `https://www.videoindexer.ai/accounts/00000000-0000-0000-0000-000000000000/videos/b2b2c74b8e/?location=trial`.
-### The portal experience
+### The website experience
-To embed a video, use the portal as described below:
+To embed a video, use the website as described below:
1. Sign in to the [Azure Video Indexer](https://www.videoindexer.ai/) website. 1. Select the video that you want to work with and press **Play**.
azure-vmware Attach Disk Pools To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md
Title: Attach Azure disk pools to Azure VMware Solution hosts (Preview)
+ Title: Attach Azure disk pools to Azure VMware Solution hosts
description: Learn how to attach an Azure disk pool surfaced through an iSCSI target as the VMware vSphere datastore of an Azure VMware Solution private cloud. Once the datastore is configured, you can create volumes on it and consume them from your Azure VMware Solution private cloud.
ms.devlang: azurecli
-# Attach disk pools to Azure VMware Solution hosts (Preview)
+# Attach disk pools to Azure VMware Solution hosts
[Azure disk pools](../virtual-machines/disks-pools.md) offer persistent block storage to applications and workloads backed by Azure Disks. You can use disks as the persistent storage for Azure VMware Solution for optimal cost and performance. For example, you can scale up by using disk pools instead of scaling clusters if you host storage-intensive workloads. You can also use disks to replicate data from on-premises or primary VMware vSphere environments to disk storage for the secondary site. To scale storage independent of the Azure VMware Solution hosts, we support surfacing [ultra disks](../virtual-machines/disks-types.md#ultra-disks), [premium SSD](../virtual-machines/disks-types.md#premium-ssds) and [standard SSD](../virtual-machines/disks-types.md#standard-ssds) as the datastores. >[!IMPORTANT]
->Azure disk pools on Azure VMware Solution (Preview) is currently in public preview.
->This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
->For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>We are officially halting the preview of Azure Disk Pools, and it will not be made generally available.
+>New customers will not be able to register the `Microsoft.StoragePool` resource provider on their subscription and deploy new Disk Pools.
+> Existing subscriptions registered with Microsoft.StoragePool may continue to deploy and manage disk pools for the time being.
Azure managed disks are attached to one iSCSI controller virtual machine deployed under the Azure VMware Solution resource group. Disks get deployed as storage targets to a disk pool, and each storage target shows as an iSCSI LUN under the iSCSI target. You can expose a disk pool as an iSCSI target connected to Azure VMware Solution hosts as a datastore. A disk pool surfaces as a single endpoint for all underlying disks added as storage targets. Each disk pool can have only one iSCSI controller.
backup Use Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/use-archive-tier-support.md
Title: Use Archive tier description: Learn about using Archive tier Support for Azure Backup. Previously updated : 07/04/2022 Last updated : 10/03/2022 zone_pivot_groups: backup-client-portaltier-powershelltier-clitier
You can now view all the recovery points that are moved to archive.
:::image type="content" source="./media/use-archive-tier-support/view-recovery-points-list-inline.png" alt-text="Screenshot showing the list of recovery points." lightbox="./media/use-archive-tier-support/view-recovery-points-list-expanded.png":::
-## Enable Smart Tiering to Vault-archive using a backup policy (preview)
+## Enable Smart Tiering to Vault-archive using a backup policy
-You can automatically move all eligible/recommended recovery points to vault-archive by configuring the required settings in the backup policy.
-
->[!Note]
->This feature is currently in preview. Enable your subscription to use this feature.
-
-### Enable a subscription for Smart Tiering (preview)
-
-To enable a subscription, follow these steps:
-
-1. In the Azure portal, select the subscription you want to enable.
-
-1. Select **Preview Features** in the left pane.
-
- :::image type="content" source="./media/use-archive-tier-support/select-preview-feature-inline.png" alt-text="Screenshot showing to select the Preview Feature option." lightbox="./media/use-archive-tier-support/select-preview-feature-expanded.png":::
-
-1. Select **Smart Tiering for Azure Backup**.
-
- :::image type="content" source="./media/use-archive-tier-support/select-smart-tiering-for-archive-inline.png" alt-text="Screenshot showing to select Smart Tiering for Archive option." lightbox="./media/use-archive-tier-support/select-smart-tiering-for-archive-expanded.png":::
-
-1. Select **Register**.
-
-The subscription gets enabled for Smart Tiering in a few minutes.
+You can automatically move all eligible/recommended recovery points to Vault-archive by configuring the required settings in the backup policy.
### Enable Smart Tiering for Azure Virtual Machine
To enable Smart Tiering for Azure VM backup policies, follow these steps:
1. In **Backup policy**, select **Enable Tiering**.
- :::image type="content" source="./media/use-archive-tier-support/select-enable-tiering-inline.png" alt-text="Screenshot showing to select the Enable Tiering option." lightbox="./media/use-archive-tier-support/select-enable-tiering-expanded.png":::
- 1. Select one of the following options to move to Vault-archive tier: - **Recommended recovery points**: This option moves all recommended recovery points to the vault-archive tier. [Learn more](archive-tier-support.md#archive-recommendations-only-for-azure-virtual-machines) about recommendations. - **Eligible recovery points**: This option moves all eligible recovery point after a specific number of days.
- :::image type="content" source="./media/use-archive-tier-support/select-eligible-recovery-points-inline.png" alt-text="Screenshot showing to select the Eligible recovery points option." lightbox="./media/use-archive-tier-support/select-eligible-recovery-points-expanded.png":::
+ :::image type="content" source="./media/use-archive-tier-support/select-eligible-recovery-points-inline.png" alt-text="Screenshot showing to select the Eligible recovery points option." lightbox="./media/use-archive-tier-support/select-eligible-recovery-points-expanded.png":::
>[!Note] >- The value of *x* can range from *3 months* to *(monthly/yearly retention in months -6)*.
You can perform the following operations using the sample scripts provided by Az
You can also write a script as per your requirements or modify the above sample scripts to fetch the required backup items.
+## Enable Smart Tiering to Vault-archive using a backup policy
+
+You can automatically move all eligible/recommended recovery points to Vault-archive by using a backup policy.
+
+In the following sections, you'll learn how to enable Smart Tiering for eligible recovery points.
+
+### Create a policy
+
+To create and configure a policy, run the following cmdlets:
+
+1. Fetch the vault name:
+
+ ```azurepowershell
+ $vault = Get-AzRecoveryServicesVault -ResourceGroupName "testRG" -Name "TestVault"
+ ```
+
+1. Set the policy schedule:
+
+ ```azurepowershell
+ $schPol = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType AzureVM -BackupManagementType AzureVM -PolicySubType Enhanced -ScheduleRunFrequency Weekly
+ ```
+
+1. Set long-term retention point retention:
+
+ ```azurepowershell
+ $retPol = Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType AzureVM -BackupManagementType AzureVM -ScheduleRunFrequency Weekly
+ ```
+
+### Configure Smart Tiering
+
+You can now configure Smart Tiering to move recovery points to Vault-archive and retain them using the backup policy.
+
+>[!Note]
+>After configuration, Smart Tiering is automatically enabled and moves the recovery points to Vault-archive.
+
+#### Tier recommended recovery points for Azure Virtual Machines
+
+To tier all recommended recovery points to Vault-archive, run the following cmdlet:
+
+```azurepowershell
+$pol = New-AzRecoveryServicesBackupProtectionPolicy -Name TestPolicy -WorkloadType AzureVM -BackupManagementType AzureVM -RetentionPolicy $retPol -SchedulePolicy $schPol -VaultId $vault.ID -MoveToArchiveTier $true -TieringMode TierRecommended
+```
+
+[Learn more](archive-tier-support.md#archive-recommendations-only-for-azure-virtual-machines) about archive recommendations for Azure VMs.
+
+If the policy doesn't match the Vault-archive criteria, the following error appears:
+
+```Output
+New-AzRecoveryServicesBackupProtectionPolicy: TierAfterDuration needs to be >= 3 months, at least one of monthly or yearly retention should be >= (TierAfterDuration + 6) months
+```
+>[!Note]
+>*Tier recommended* is supported for Azure Virtual Machines, and not for SQL Server in Azure Virtual Machines.
+
+#### Tier all eligible Azure Virtual Machines backup items
+
+To tier all eligible Azure VM recovery points to Vault-archive, specify the number of months after which you want to move the recovery points, and run the following cmdlet:
+
+```azurepowershell
+$pol = New-AzRecoveryServicesBackupProtectionPolicy -Name hiagaVMArchiveTierAfter -WorkloadType AzureVM -BackupManagementType AzureVM -RetentionPolicy $retPol -SchedulePolicy $schPol -VaultId $vault.ID -MoveToArchiveTier $true -TieringMode TierAllEligible -TierAfterDuration 3 -TierAfterDurationType Months
+```
+
+>[!Note]
+>- The number of months must range from *3* to *(Retention - 6)* months.
+>- Enabling Smart Tiering for eligible recovery points can increase your overall costs.
++
+#### Tier all eligible SQL Server in Azure VM backup items
+
+To tier all eligible SQL Server in Azure VM recovery points to Vault-archive, specify the number of days after which you want to move the recovery points and run the following cmdlet:
+
+```azurepowershell
+$pol = New-AzRecoveryServicesBackupProtectionPolicy -Name SQLArchivePolicy -WorkloadType MSSQL -BackupManagementType AzureWorkload -RetentionPolicy $retPol -SchedulePolicy $schPol -VaultId $vault.ID -MoveToArchiveTier $true -TieringMode TierAllEligible -TierAfterDuration 40 -TierAfterDurationType Days
+```
+
+>[!Note]
>The number of days must range from *45* to *(Retention – 180)* days.
+
+If the policy isn't eligible for Vault-archive, the following error appears:
+
+```Output
+New-AzRecoveryServicesBackupProtectionPolicy: TierAfterDuration needs to be >= 45 Days, at least one retention policy for full backup (daily / weekly / monthly / yearly) should be >= (TierAfter + 180) days
+```
+## Modify a policy
+
+To modify an existing policy, first fetch it by running the following cmdlet:
+
+```azurepowershell
+$pol = Get-AzRecoveryServicesBackupProtectionPolicy -VaultId $vault.ID | Where { $_.Name -match "Archive" }
+```
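The fetched policy object can then be updated and applied back with `Set-AzRecoveryServicesBackupProtectionPolicy`. The following is a minimal sketch with illustrative values; keep the duration within the documented ranges.

```azurepowershell
# Example only: switch the fetched Azure VM policy to tier all eligible recovery points after 6 months.
Set-AzRecoveryServicesBackupProtectionPolicy -VaultId $vault.ID -Policy $pol[0] `
    -MoveToArchiveTier $true -TieringMode TierAllEligible -TierAfterDuration 6 -TierAfterDurationType Months
```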
+
+## Disable Smart Tiering
+
+To disable Smart Tiering to archive recovery points, run the following cmdlet:
+
+```azurepowershell
+Set-AzRecoveryServicesBackupProtectionPolicy -VaultId $vault.ID -Policy $pol[0] -MoveToArchiveTier $false
+```
+
+### Enable Smart Tiering
+
+To enable Smart Tiering after you've disabled it, run the following cmdlet:
+
+- **Azure Virtual Machine**
+
+ ```azurepowershell
+ Set-AzRecoveryServicesBackupProtectionPolicy -VaultId $vault.ID -Policy $pol[0] -MoveToArchiveTier $true -TieringMode TierRecommended
+ ```
+
+- **Azure SQL Server in Azure VMs**
+
+ ```azurepowershell
+ Set-AzRecoveryServicesBackupProtectionPolicy -VaultId $vault.ID -Policy $pol[1] -MoveToArchiveTier $true -TieringMode TierAllEligible -TierAfterDuration 45 -TierAfterDurationType Days
+ ```
+ ## Next steps - Use Archive tier support via [Azure portal](?pivots=client-portaltier)/[CLI](?pivots=client-clitier).-- [Troubleshoot Archive tier errors](troubleshoot-archive-tier.md)
+- [Troubleshoot Archive tier errors](troubleshoot-archive-tier.md).
::: zone-end
batch Batch Certificate Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-certificate-migration-guide.md
Title: Batch Certificate Migration Guide
-description: Describes the migration steps for the batch certificates and the end of support details.
+ Title: Migrate Batch certificates to Azure Key Vault
+description: Learn how to migrate access management from using certificates in Azure Batch to Azure Key Vault and plan for feature end of support.
-+ Last updated 08/15/2022
-# Batch Certificate Migration Guide
-Securing the application and critical information has become essential in today's needs. With growing customers and increasing demand for security, managing key information plays a significant role in securing data. Many customers need to store secure data in the application, and it needs to be managed to avoid any leakage. In addition, only legitimate administrators or authorized users should access it. Azure Batch offers Certificates created and managed by the Batch service. Azure Batch also provides a Key Vault option, and it's considered an azure-standard method for delivering more controlled secure access management.
+# Migrate Batch certificates to Azure Key Vault
-Azure Batch provides certificates feature at the account level. Customers must generate the Certificate and upload it manually to the Azure Batch via the portal. To access the Certificate, it must be associated and installed for the 'Current User.' The Certificate is usually valid for one year and must follow a similar procedure every year.
+On *February 29, 2024*, the certificates feature for Azure Batch access management will be retired. Learn how to migrate your access management approach from using certificates in Azure Batch to using Azure Key Vault.
-For Azure Batch customers, a secure way of access should be provided in a more standardized way, reducing any manual interruption and reducing the readability of key generated. Therefore, we'll retire the certificate feature on **29 February 2024** to reduce the maintenance effort and better guide customers to use Azure Key Vault as a standard and more modern method with advanced security. After it's retired, the Certificate functionality may cease working properly. Additionally, pool creation with certificates will be rejected and possibly resize up.
+## About the feature
-## Retirement alternatives
+Often, you need to store secure data for an application. Your data must be securely managed so that only administrators or authorized users can access it.
-Azure Key Vault is the service provided by Microsoft Azure to store and manage secrets, certificates, tokens, keys, and other configuration values that authenticated users access the applications and services. The original idea was to remove the hard-coded storing of these secrets and keys in the application code.
+Currently, Azure Batch offers two ways to secure access. You can use a certificate that you create and manage in Azure Batch or you can use Azure Key Vault to store an access key. Using a key vault is an Azure-standard way to deliver more controlled secure access management.
-Azure Key Vault provides security at the transport layer by ensuring any data flow from the key vault to the client application is encrypted. Azure key vault stores the secrets and keys with such strong encryption that even Microsoft itself won't see the keys or secrets in any way.
+You can use a certificate at the account level in Azure Batch. You must generate the certificate and upload it manually to Batch by using the Azure portal. To access the certificate, the certificate must be associated with and installed for only the current user. A certificate typically is valid for one year, and it must be updated each year.
-Azure Key Vault provides a secure way to store the information and define the fine-grained access control. All the secrets can be managed from one dashboard. Azure Key Vault can store the key in the software-protected or hardware protected by hardware security module (HSMs) mechanism. In addition, it has a mechanism to auto-renew the Key Vault certificates.
+## Feature end of support
-## Migration steps
+To move toward a simpler, standardized way to secure access to your Batch resources, on February 29, 2024, we'll retire the certificates feature in Azure Batch. We recommend that you use Azure Key Vault as a standard and more modern method to secure your resources in Batch.
-Azure Key Vault can be created in three ways:
+In Key Vault, you get these benefits:
-1. Using Azure portal
+- Reduced manual maintenance and streamlined maintenance overall
+- Reduced access to and readability of the key that's generated
+- Advanced security
-2. Using PowerShell
+After the certificates feature in Azure Batch is retired on February 29, 2024, a certificate in Batch might not work as expected. After that date, you won't be able to create a pool by using a certificate. Pools that continue to use certificates after the feature is retired might increase in size and cost.
-3. Using CLI
+## Alternative: Use Key Vault
-**Create Azure Key Vault step by step procedure using Azure portal:**
+Azure Key Vault is an Azure service you can use to store and manage secrets, certificates, tokens, keys, and other configuration values that give authenticated users access to secure applications and services. Key Vault is based on the idea that security is improved and standardized when you remove hard-coded secrets and keys from application code that's deployed.
-__Prerequisite__: Valid Azure subscription and owner/contributor access on Key Vault service.
+Key Vault provides security at the transport layer by ensuring that any data flow from the key vault to the client application is encrypted. Azure Key Vault stores secrets and keys with such strong encryption that even Microsoft can't read key vault-protected keys and secrets.
- 1. Log in to the Azure portal.
+Azure Key Vault gives you a secure way to store essential access information and to set fine-grained access control. You can manage all secrets from one dashboard. You can choose to store a key as either a software-protected key or a key protected by hardware security modules (HSMs). You can also set Key Vault to auto-renew certificates.
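As a brief, hedged sketch of that pattern (the vault and secret names below are placeholders, not values from this article), storing and reading an application secret with the Az.KeyVault cmdlets might look like this:

```azurepowershell
# Placeholder vault and secret names - replace with your own.
$secretValue = ConvertTo-SecureString "<secret-value>" -AsPlainText -Force

# Store the secret in the key vault instead of embedding it in application code.
Set-AzKeyVaultSecret -VaultName "<KeyVaultName>" -Name "BatchAppSecret" -SecretValue $secretValue

# Read the secret back from an authorized client (requires a recent Az.KeyVault version for -AsPlainText).
$secret = Get-AzKeyVaultSecret -VaultName "<KeyVaultName>" -Name "BatchAppSecret" -AsPlainText
```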
- 2. In the top-level search box, look for **Key Vaults**.
+## Create a key vault
- 3. In the Key Vault dashboard, click on create and provide all the details like subscription, resource group, Key Vault name, select the pricing tier (standard/premium), and select region. Once all these details are provided, click on review, and create. This will create the Key Vault account.
+To create a key vault to manage access for Batch resources, use one of the following options:
- 4. Key Vault names need to be unique across the globe. Once any user has taken a name, it wonΓÇÖt be available for other users.
+- Azure portal
+- PowerShell
+- Azure CLI
- 5. Now go to the newly created Azure Key Vault. There you can see the vault name and the vault URI used to access the vault.
+### Create a key vault by using the Azure portal
-**Create Azure Key Vault step by step using the Azure PowerShell:**
+- **Prerequisites**: To create a key vault by using the Azure portal, you must have a valid Azure subscription and Owner or Contributor access for Azure Key Vault.
- 1. Log in to the user PowerShell using the following command - Login-AzAccount
+To create a key vault:
- 2. Create an 'azure secure' resource group in the 'eastus' location. You can change the name and location as per your need.
-```
- New-AzResourceGroup -Name "azuresecure" -Location "EastUS"
-```
- 3. Create the Azure Key Vault using the cmdlet. You need to provide the key vault name, resource group, and location.
-```
- New-AzKeyVault -Name "azuresecureKeyVault" -ResourceGroupName "azuresecure" -Location "East US"
-```
+1. Sign in to the Azure portal.
- 4. Created the Azure Key Vault successfully using the PowerShell cmdlet.
+1. Search for **key vaults**.
-**Create Azure Key Vault step by step using the Azure CLI bash:**
+1. In the Key Vault dashboard, select **Create**.
- 1. Create an 'azure secure' resource in the 'eastus' location. You can change the name and location as per your need. Use the following bash command.
-```
- az group create ΓÇôname "azuresecure" -l "EastUS."
-```
+1. Enter or select your subscription, a resource group name, a key vault name, the pricing tier (Standard or Premium), and the region closest to your users. Each key vault name must be unique in Azure.
- 2. Create the Azure Key Vault using the bash command. You need to provide the key vault name, resource group, and location.
-```
- az keyvault create ΓÇôname ΓÇ£azuresecureKeyVaultΓÇ¥ ΓÇôresource-group ΓÇ£azureΓÇ¥ ΓÇôlocation ΓÇ£EastUSΓÇ¥
-```
- 3. Successfully created the Azure Key Vault using the Azure CLI bash command.
+1. Select **Review**, and then select **Create** to create the key vault account.
-## FAQ
+1. Go to the key vault you created. The key vault name and the URI you use to access the vault are shown under deployment details.
- 1. Is Certificates or Azure Key Vault recommended?
- Azure Key Vault is recommended and essential to protect the data in the cloud.
+For more information, see [Quickstart: Create a key vault by using the Azure portal](../key-vault/general/quick-create-portal.md).
- 2. Does user subscription mode support Azure Key Vault?
- Yes, it's mandatory to create Key Vault while creating the Batch account in user subscription mode.
+### Create a key vault by using PowerShell
- 3. Are there best practices to use Azure Key Vault?
- Best practices are covered [here](../key-vault/general/best-practices.md).
+1. Use the PowerShell option in Azure Cloud Shell to sign in to your account:
+
+ ```powershell
+ Login-AzAccount
+ ```
+
+1. Use the following command to create a new resource group in the region that's closest to your users. For the `<placeholder>` values, enter the information for the Key Vault instance you want to create.
+
+ ```powershell
+ New-AzResourceGroup -Name <ResourceGroupName> -Location <Location>
+ ```
+
+1. Use the following cmdlet to create the key vault. For the `<placeholder>` values, use the key vault name, resource group name, and region for the key vault you want to create.
+
+ ```powershell
+ New-AzKeyVault -Name <KeyVaultName> -ResourceGroupName <ResourceGroupName> -Location <Location>
+ ```
+
+For more information, see [Quickstart: Create a key vault by using PowerShell](../key-vault/general/quick-create-powershell.md).
+
+### Create a key vault by using the Azure CLI
+
+1. Use the Bash option in the Azure CLI to create a new resource group in the region that's closest to your users. For the `<placeholder>` values, enter the information for the Key Vault instance you want to create.
+
+ ```bash
+    az group create --name <ResourceGroupName> -l <Location>
+ ```
+
+1. Create the key vault by using the following command. For the `<placeholder>` values, use the key vault name, resource group name, and region for the key vault you want to create.
+
+ ```bash
+    az keyvault create --name <KeyVaultName> --resource-group <ResourceGroupName> --location <Location>
+ ```
+
+For more information, see [Quickstart: Create a key vault by using the Azure CLI](../key-vault/general/quick-create-cli.md).
+
+## FAQs
+
+- Does Microsoft recommend using Azure Key Vault for access management in Batch?
+
+ Yes. We recommend that you use Azure Key Vault as part of your approach to essential data protection in the cloud.
+
+- Does user subscription mode support Azure Key Vault?
+
+ Yes. In user subscription mode, you must create the key vault at the time you create the Batch account.
+
+- Where can I find best practices for using Azure Key Vault?
+
+ See [Azure Key Vault best practices](../key-vault/general/best-practices.md).
## Next steps
-For more information, see [Certificate Access Control](../key-vault/certificates/certificate-access-control.md).
+For more information, see [Key Vault certificate access control](../key-vault/certificates/certificate-access-control.md).
batch Batch Pools Without Public Ip Addresses Classic Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md
Title: Batch Pools without Public IP Addresses Classic Retirement Migration Guide
-description: Describes the migration steps for the batch pool without public ip addresses and the end of support details.
+ Title: Migrate pools without public IP addresses (classic) in Batch
+description: Learn how to opt in to migrate Azure Batch pools without public IP addresses (classic) and plan for feature end of support.
-+ Last updated 09/01/2022
-# Batch Pools without Public IP Addresses Classic Retirement Migration Guide
-By default, all the compute nodes in an Azure Batch virtual machine (VM) configuration pool are assigned a public IP address. This address is used by the Batch service to schedule tasks and for communication with compute nodes, including outbound access to the internet. To restrict access to these nodes and reduce the discoverability of these nodes from the internet, we released [Batch pools without public IP addresses (classic)](./batch-pool-no-public-ip-address.md).
+# Migrate pools without public IP addresses (classic) in Batch
-In late 2021, we launched a simplified compute node communication model for Azure Batch. The new communication model improves security and simplifies the user experience. Batch pools no longer require inbound Internet access and outbound access to Azure Storage, only outbound access to the Batch service. As a result, Batch pools without public IP addresses (classic) which is currently in public preview will be retired on **31 March 2023**, and will be replaced with simplified compute node communication pools without public IPs.
+The Azure Batch feature pools without public IP addresses (classic) will be retired on *March 31, 2023*. Learn how to migrate eligible pools to simplified compute node communication (preview) pools without public IP addresses. You must opt in to migrate your Batch pools.
-## Retirement alternatives
+## About the feature
-[Simplified Compute Node Communication Pools without Public IPs](./simplified-node-communication-pool-no-public-ip.md) requires using simplified compute node communication. It provides customers with enhanced security for their workload environments on network isolation and data exfiltration to Azure Batch accounts. Its key benefits include:
+By default, all the compute nodes in an Azure Batch virtual machine (VM) configuration pool are assigned a public IP address. The Batch service uses the IP address to schedule tasks and for communication with compute nodes, including outbound access to the internet. To restrict access to these nodes and reduce public discoverability of the nodes, we released the Batch feature [pools without public IP addresses (classic)](./batch-pool-no-public-ip-address.md). Currently, the feature is in preview.
-* Allow creating simplified node communication pool without public IP addresses.
-* Support Batch private pool using a new private endpoint (sub-resource: **nodeManagement**) for Azure Batch account.
-* Simplified private link DNS zone for Batch account private endpoints: changed from **privatelink.\<region>.batch.azure.com** to **privatelink.batch.azure.com**.
-* Mutable public network access for Batch accounts.
-* Firewall support for Batch account public endpoints: configure IP address network rules to restrict public network access with Batch accounts.
+## Feature end of support
-## Migration steps
+In late 2021, we launched a simplified compute node communication model for Azure Batch. The new communication model improves security and simplifies the user experience. Batch pools no longer require inbound internet access and outbound access to Azure Storage. Batch pools now need only outbound access to the Batch service. As a result, on March 31, 2023, we'll retire the Batch feature pools without public IP addresses (classic). The feature will be replaced in Batch with simplified compute node communication for pools without public IP addresses.
-Batch pool without public IP addresses (classic) will retire on **31 March 2023** and will be updated to simplified compute node communication pools without public IPs. For existing pools that use the previous preview version of Batch pool without public IP addresses (classic), it's only possible to migrate pools created in a virtual network. To migrate the pool, follow the opt-in process for simplified compute node communication:
+## Alternative: Use simplified node communication
+
+The alternative to using a Batch pool without a public IP address (classic) requires using [simplified node communication](./simplified-node-communication-pool-no-public-ip.md). The option gives you enhanced security for your workload environments on network isolation and data exfiltration to Batch accounts. The key benefits include:
+
+- You can create simplified node communication pools without public IP addresses.
+- You can create a Batch private pool by using a new private endpoint (in the nodeManagement subresource) for an Azure Batch account, as sketched in the example after this list.
+- Use a simplified private link DNS zone for Batch account private endpoints. The private link changes from `privatelink.<region>.batch.azure.com` to `privatelink.batch.azure.com`.
+- Use mutable public network access for Batch accounts.
+- Get firewall support for Batch account public endpoints. You can configure IP address network rules to restrict public network access to your Batch account.
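As a hedged Azure PowerShell sketch of that private endpoint, the following shows one way to create it; the resource names are placeholders, and the `nodeManagement` group ID comes from the list above.

```azurepowershell
# Placeholder names - replace with your own Batch account, resource group, virtual network, and subnet.
$batchAccountId = "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Batch/batchAccounts/<account>"
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "<rg>" -Name "<vnet>"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "<subnet>" }

# Private link service connection that targets the Batch node management subresource.
$connection = New-AzPrivateLinkServiceConnection -Name "batch-nodemgmt-connection" `
    -PrivateLinkServiceId $batchAccountId -GroupId "nodeManagement"

# Create the private endpoint in the virtual network used by the pool.
New-AzPrivateEndpoint -ResourceGroupName "<rg>" -Name "batch-nodemgmt-pe" `
    -Location "<region>" -Subnet $subnet -PrivateLinkServiceConnection $connection
```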
+
+## Opt in and migrate your eligible pools
+
+When the Batch feature pools without public IP addresses (classic) retires on March 31, 2023, existing pools that use the feature can migrate only if the pools were created in a virtual network. To migrate your eligible pools, complete the opt-in process to use simplified compute node communication:
1. Opt in to [use simplified compute node communication](./simplified-compute-node-communication.md#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication).
- ![Support Request](../batch/media/certificates/opt-in.png)
+ :::image type="content" source="media/certificates/opt-in.png" alt-text="Screenshot that shows creating a support request to opt in.":::
-2. Create a private endpoint for Batch node management in the virtual network.
+1. Create a private endpoint for Batch node management in the virtual network.
- ![Create Endpoint](../batch/media/certificates/private-endpoint.png)
+ :::image type="content" source="media/certificates/private-endpoint.png" alt-text="Screenshot that shows how to create an endpoint.":::
-3. Scale down the pool to zero nodes.
+1. Scale down the pool to zero nodes.
- ![Scale Down](../batch/media/certificates/scale-down-pool.png)
+ :::image type="content" source="media/certificates/scale-down-pool.png" alt-text="Screenshot that shows how to scale down a pool.":::
-4. Scale out the pool again. The pool is then automatically migrated to the new version of the preview.
+1. Scale out the pool again. The pool is then automatically migrated to the new version of the preview.
- ![Scale Out](../batch/media/certificates/scale-out-pool.png)
+ :::image type="content" source="media/certificates/scale-out-pool.png" alt-text="Screenshot that shows how to scale out a pool.":::
-## FAQ
+## FAQs
-* How can I migrate my Batch pool without public IP addresses (classic) to simplified compute node communication pools without public IPs?
+- How can I migrate my Batch pools that use the pools without public IP addresses (classic) feature to simplified compute node communication?
- You can only migrate your pool to simplified compute node communication pools if it was created in a virtual network. Otherwise, youΓÇÖd need to create a new simplified compute node communication pool without public IPs.
+ If you created the pools in a virtual network, [opt in and complete the migration process](#opt-in-and-migrate-your-eligible-pools).
+
+ If your pools weren't created in a virtual network, create a new simplified compute node communication pool without public IP addresses.
-* What differences will I see in billing?
+- What differences will I see in billing?
- Compared with Batch pools without public IP addresses (classic), the simplified compute node communication pools without public IPs support will reduce costs because it wonΓÇÖt need to create network resources the following: load balancer, network security groups, and private link service with the Batch pool deployments. However, there will be a [cost associated with private link](https://azure.microsoft.com/pricing/details/private-link/) or other outbound network connectivity used by pools, as controlled by the user, to allow communication with the Batch service without public IP addresses.
+  Compared to Batch pools without public IP addresses (classic), simplified compute node communication pools without public IP addresses reduce cost because they don't create the following network resources with Batch pool deployments: a load balancer, network security groups, and a private link service. However, you'll see a cost associated with [Azure Private Link](https://azure.microsoft.com/pricing/details/private-link/) or other outbound network connectivity that your pools use for communication with the Batch service.
-* Will there be any performance changes?
+- Will I see any changes in performance?
- No known performance differences compared to Batch pools without public IP addresses (classic).
+ No known performance differences exist for simplified compute node communication pools without public IP addresses compared to Batch pools without public IP addresses (classic).
-* How can I connect to my pool nodes for troubleshooting?
+- How can I connect to my pool nodes for troubleshooting?
- Similar to Batch pools without public IP addresses (classic). As there's no public IP address for the Batch pool, users will need to connect their pool nodes from within the virtual network. You can create a jump box VM in the virtual network or use other remote connectivity solutions like [Azure Bastion](../bastion/bastion-overview.md).
+  The process is similar to the way you connect for pools without public IP addresses (classic). Because the Batch pool doesn't have a public IP address, connect to your pool nodes from within the virtual network. You can create a jump box VM in the virtual network or use a remote connectivity solution like [Azure Bastion](../bastion/bastion-overview.md).
-* Will there be any change to how my workloads are downloaded from Azure Storage?
+- Will there be any change to how my workloads are downloaded from Azure Storage?
- Similar to Batch pools without public IP addresses (classic), users will need to provide their own internet outbound connectivity if their workloads need access to other resources like Azure Storage.
+ Like for Batch pools without public IP addresses (classic), you must provide your own internet outbound connectivity if your workloads need access to a resource like Azure Storage.
+- What if I don't migrate my pools to simplified compute node communication pools without public IP addresses?
+- What if I donΓÇÖt migrate my pools to simplified compute node communication pools without public IP addresses?
- After **31 March 2023**, we'll stop supporting Batch pool without public IP addresses. The functionality of the existing pool in that configuration may break, such as scale-out operations, or may be actively scaled down to zero at any point in time after that date.
+ After *March 31, 2023*, we will stop supporting Batch pools without public IP addresses (classic). After that date, existing pool functionality, including scale-out operations, might break. The pool might actively be scaled down to zero at any time.
## Next steps
batch Batch Tls 101 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-tls-101-migration-guide.md
Title: Batch Tls 1.0 Migration Guide
-description: Describes the migration steps for the batch TLS 1.0 and the end of support details.
+ Title: Migrate client code to TLS 1.2 in Azure Batch
+description: Learn how to migrate client code to TLS 1.2 in Azure Batch to plan for end of support for TLS 1.0 and TLS 1.1.
-+ Last updated 08/16/2022
-# Batch TLS 1.0 Migration Guide
-Transport Layer Security (TLS) versions 1.0 and 1.1 are known to be susceptible to attacks such as BEAST and POODLE, and to have other Common Vulnerabilities and Exposures (CVE) weaknesses. They also don't support the modern encryption methods and cipher suites recommended by Payment Card Industry (PCI) compliance standards. There's an industry-wide push toward the exclusive use of TLS version 1.2 or later.
+# Migrate client code to TLS 1.2 in Batch
-To follow security best practices and remain in compliance with industry standards, Azure Batch will retire Batch TLS 1.0/1.1 on **31 March 2023**. Most customers have already migrated to TLS 1.2. Customers who continue to use TLS 1.0/1.1 can be identified via existing BatchOperation telemetry. Customers will need to adjust their existing workflows to ensure that they're using TLS 1.2. Failure to migrate to TLS 1.2 will break existing Batch workflows.
+To support security best practices and remain in compliance with industry standards, Azure Batch will retire Transport Layer Security (TLS) 1.0 and TLS 1.1 on *March 31, 2023*. Learn how to migrate the client code that you use with Batch to TLS 1.2.
-## Migration strategy
+## End of support for TLS 1.0 and TLS 1.1 in Batch
-Customers must update client code before the TLS 1.0/1.1 retirement.
+TLS versions 1.0 and 1.1 are known to be susceptible to BEAST and POODLE attacks and to have other Common Vulnerabilities and Exposures (CVE) weaknesses. TLS 1.0 and TLS 1.1 don't support the modern encryption methods and cipher suites that the Payment Card Industry (PCI) compliance standards recommend. Microsoft is participating in an industry-wide push toward the exclusive use of TLS version 1.2 or later.
-- Customers using native WinHTTP for client code can follow this [guide](https://support.microsoft.com/topic/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-winhttp-in-windows-c4bd73d2-31d7-761e-0178-11268bb10392).
+Most customers have already migrated to TLS 1.2. Customers who continue to use TLS 1.0 or TLS 1.1 can be identified via existing BatchOperation data. If you're using TLS 1.0 or TLS 1.1, to avoid disruption to your Batch workflows, update existing workflows to use TLS 1.2.
-- Customers using .NET Framework for their client code should upgrade to .NET > 4.7, that which enforces TLS 1.2 by default.
+## Alternative: Use TLS 1.2
-- For customers using .NET Framework who are unable to upgrade to > 4.7, please follow this [guide](/dotnet/framework/network-programming/tls) to enforce TLS 1.2.
+To avoid disruption to your Batch workflows, you must update your client code to use TLS 1.2 before the TLS 1.0 and TLS 1.1 retirement in Batch on March 31, 2023.
-For TLS best practices, refer to [TLS best practices for .NET Framework](/dotnet/framework/network-programming/tls).
+For specific development use cases, see the following information:
-## FAQ
+- If you use native WinHTTP for your client application code, see the guidance in [Update to enable TLS 1.1 and TLS 1.2 as default security protocols](https://support.microsoft.com/topic/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-winhttp-in-windows-c4bd73d2-31d7-761e-0178-11268bb10392).
-* Why must we upgrade to TLS 1.2?<br>
- TLS 1.0/1.1 has security issues that are fixed in TLS 1.2. TLS 1.2 has been available since 2008 and is the current default version in most frameworks.
+- If you use the .NET Framework for your client application code, upgrade to .NET 4.7 or later. Beginning in .NET 4.7, TLS 1.2 is enforced by default.
-* What happens if I donΓÇÖt upgrade?<br>
- After the feature retirement, our client application won't work until you upgrade.<br>
+- If you use the .NET Framework and you *can't* upgrade to .NET 4.7 or later, see the guidance in [TLS for network programming](/dotnet/framework/network-programming/tls) to enforce TLS 1.2.
-* Will Upgrading to TLS 1.2 affect the performance?<br>
- Upgrading to TLS 1.2 won't affect performance.<br>
+For more information, see [TLS best practices for the .NET Framework](/dotnet/framework/network-programming/tls).
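If you drive Batch from PowerShell scripts rather than a compiled client, a minimal sketch of pinning the session to TLS 1.2 follows; this is an assumption about your client environment, not guidance taken from this article.

```powershell
# Force the current PowerShell session (and the .NET classes it loads) to negotiate TLS 1.2.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# Confirm which protocols the session now offers.
[Net.ServicePointManager]::SecurityProtocol
```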
-* How do I know if IΓÇÖm using TLS 1.0/1.1?<br>
- You can check the Audit Log to determine the TLS version you're using.
+## FAQs
+
+- Why do I need to upgrade to TLS 1.2?
+
+ TLS 1.0 and TLS 1.1 have security issues that are fixed in TLS 1.2. TLS 1.2 has been available since 2008. TLS 1.2 is the current default version in most development frameworks.
+
+- What happens if I don't upgrade?
+
+ After the feature retirement from Azure Batch, your client application won't work until you upgrade the code to use TLS 1.2.
+
+- Will upgrading to TLS 1.2 affect the performance of my application?
+
+ Upgrading to TLS 1.2 won't affect your application's performance.
+
+- How do I know if I'm using TLS 1.0 or TLS 1.1?
+
+ To determine the TLS version you're using, check the audit log for your Batch deployment.
## Next steps
-For more information, see [How to enable TLS 1.2 on clients](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client).
+For more information, see [Enable TLS 1.2 on clients](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client).
batch Job Pool Lifetime Statistics Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/job-pool-lifetime-statistics-migration-guide.md
Title: Batch job pool lifetime statistics migration guide
-description: Describes the migration steps for the batch job pool lifetime statistics and the end of support details.
+ Title: Migrate from job and pool lifetime statistics to logs in Azure Batch
+description: Learn how to migrate your Batch monitoring approach from using job and pool lifetime statistics API to using logs and plan for feature end of support.
-+ Last updated 08/15/2022
-# Batch Job Pool Lifetime Statistics Migration Guide
-The Azure Batch service currently supports API for Job/Pool to retrieve lifetime statistics. The API is used to get lifetime statistics for all the Pools/Jobs in the specified batch account or for a specified Pool/Job. The API collects the statistical data from when the Batch account was created until the last time updated or entire lifetime of the specified Job/Pool. Job/Pool lifetime statistics API is helpful for customers to analyze and evaluate their usage.
+# Migrate from job and pool lifetime statistics API to logs in Batch
-To make the statistical data available for customers, the Batch service allocates batch pools and schedule jobs with an in-house MapReduce implementation to perform background periodic roll-up of statistics. The aggregation is performed for all accounts/pools/jobs in each region, no matter if customer needs or queries the stats for their account/pool/job. The operating cost includes 11 VMs allocated in each region to execute MapReduce aggregation jobs. For busy regions, we had to increase the pool size further to accommodate the extra aggregation load.
+The Azure Batch lifetime statistics API for jobs and pools will be retired on *April 30, 2023*. Learn how to migrate your Batch monitoring approach from using the lifetime statistics API to using logs.
-The MapReduce aggregation logic was implemented with legacy code, and no new features are being added or improvised due to technical challenges with legacy code. Still, the legacy code and its hosting repo need to be updated frequently to accommodate ever growing load in production and to meet security/compliance requirements. In addition, since the API is featured to provide lifetime statistics, the data is growing and demands more storage and performance issues, even though most customers aren't using the API. Batch service currently eats up all the compute and storage usage charges associated with MapReduce pools and jobs.
+## About the feature
-The purpose of the API is designed and maintained to serve the customer in troubleshooting. However, not many customers use it in real life, and the customers are interested in extracting the details for not more than a month. Now more advanced ways of log/job/pool data can be collected and used on a need basis using Azure portal logs, Alerts, Log export, and other methods. Therefore, we're retiring the Job/Pool Lifetime.
+Currently, you can use API to retrieve lifetime statistics for jobs and pools in Batch. You can use the API to get lifetime statistics for all the jobs and pools in a Batch account or for a specific job or pool. The API collects statistical data from when the Batch account was created until the last time the account was updated or from when a job or pool was created. A customer might use the job and pool lifetime statistics API to help them analyze and evaluate their Batch usage.
-Job/Pool Lifetime Statistics API will be retired on **30 April 2023**. Once complete, the API will no longer work and will return an appropriate HTTP response error code back to the client.
+To make statistical data available to customers, the Batch service allocates Batch pools and schedules jobs with an in-house MapReduce implementation to do a periodic, background rollup of statistics. The aggregation is performed for all Batch accounts, pools, and jobs in each region, regardless of whether a customer needs or queries the stats for their account, pool, or job. The operating cost includes 11 VMs allocated in each region to execute MapReduce aggregation jobs. For busy regions, we had to increase the pool size further to accommodate the extra aggregation load.
-## FAQ
+The MapReduce aggregation logic was implemented by using legacy code, and no new features are being added or improved due to technical challenges with legacy code. Still, the legacy code and its hosting repository need to be updated frequently to accommodate increased loads in production and to meet security and compliance requirements. Also, because the API is designed to provide lifetime statistics, the data keeps growing, which demands more storage and causes performance issues, even though most customers don't use the API. The Batch service currently absorbs all the compute and storage usage charges that are associated with MapReduce pools and jobs.
-* Is there an alternate way to view logs of Pool/Jobs?
+## Feature end of support
- Azure portal has various options to enable the logs, namely system logs, diagnostic logs. See [Monitor Batch Solutions](./monitoring-overview.md) for more information.
+The lifetime statistics API is designed and maintained to help you troubleshoot your Batch services. However, not many customers actually use the API, and those who do are typically interested in extracting details for no more than a month. Data about logs, pools, and jobs can now be collected on an as-needed basis through more advanced methods, such as Azure portal logs, alerts, and log export.
-* Can customers extract logs to their system if the API doesn't exist?
+When the job and pool lifetime statistics API is retired on April 30, 2023, the API will no longer work, and it will return an appropriate HTTP response error code to the client.
- Azure portal log feature allows every customer to extract the output and error logs to their workspace. See [Monitor with Application Insights](./monitor-application-insights.md) for more information.
+## Alternative: Set up logs in the Azure portal
+
+The Azure portal has various options to enable the logs. System logs and diagnostic logs can provide statistical data. For more information, see [Monitor Batch solutions](./monitoring-overview.md).
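If you prefer to script this setup instead of using the portal, the same diagnostic settings can be enabled with the Azure CLI. The following is a minimal sketch that assumes a placeholder Batch account resource ID and an existing Log Analytics workspace; the `ServiceLog` and `AllMetrics` categories reflect the Batch diagnostic categories at the time of writing.

```azurecli
# Send Batch account service logs and metrics to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name batch-diagnostics \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Batch/batchAccounts/<batch-account>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category":"ServiceLog","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```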
+
+## FAQs
+
+- Is there an alternate way to view logs for pools and jobs?
+
+ The Azure portal has various options to enable the logs. Specifically, you can view system logs and diagnostic logs. For more information, see [Monitor Batch solutions](./monitoring-overview.md).
+
+- Can I extract logs to my system if the API doesn't exist?
+
+ You can use the Azure portal log feature to extract output and error logs to your workspace. For more information, see [Monitor with Application Insights](./monitor-application-insights.md).
## Next steps
batch Low Priority Vms Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/low-priority-vms-retirement-migration-guide.md
Title: Low priority vms retirement migration guide
-description: Describes the migration steps for the low priority vms retirement and the end of support details.
+ Title: Migrate low-priority VMs to spot VMs in Batch
+description: Learn how to migrate Azure Batch low-priority VMs to Azure Spot Virtual Machines and plan for feature end of support.
-+ Last updated 08/10/2022
-# Low Priority VMs Retirement Migration Guide
-Azure Batch offers Low priority and Spot virtual machines (VMs). The virtual machines are computing instances allocated from spare capacity, offered at a highly discounted rate compared to "on-demand" VMs.
+# Migrate Batch low-priority VMs to Azure Spot Virtual Machines
-Low priority VMs enable the customer to take advantage of unutilized capacity. The amount of available unutilized capacity can vary based on size, region, time of day, and more. At any point in time when Azure needs the capacity back, we'll evict low-priority VMs. Therefore, the low-priority offering is excellent for flexible workloads, like large processing jobs, dev/test environments, demos, and proofs of concept. In addition, low-priority VMs can easily be deployed through our virtual machine scale set offering.
+The Azure Batch feature low-priority virtual machines (VMs) is being retired on *September 30, 2025*. Learn how to migrate your Batch low-priority VMs to Azure Spot Virtual Machines.
-Low priority VMs are a deprecated feature, and it will never become Generally Available (GA). Spot VMs are the official preemptible offering from the Compute platform, and are generally available. Therefore, we'll retire Low Priority VMs on **30 September 2025**. After that, we'll stop supporting Low priority VMs. The existing Low priority pools may no longer work or be provisioned.
+## About the feature
-## Retirement alternative
+Currently, in Azure Batch, you can use a low-priority VM or a spot VM. Both types of VMs are Azure computing instances that are allocated from spare capacity and offered at a highly discounted rate compared to dedicated, on-demand VMs.
-As of May 2020, Azure offers Spot VMs in addition to Low Priority VMs. Like Low Priority, the Spot option allows the customer to purchase spare capacity at a deeply discounted price in exchange for the possibility that the VM may be evicted. Unlike Low Priority, you can use the Azure Spot option for single VMs and scale sets. Virtual machine scale sets scale up to meet demand, and when used with Spot VMs, will only allocate when capacity is available. 
+You can use low-priority VMs to take advantage of unused capacity in Azure. The amount of unused capacity that's available varies depending on factors like VM size, the region, and the time of day. At any time, when Microsoft needs the capacity back, we evict low-priority VMs. Therefore, the low-priority feature is excellent for flexible workloads like large processing jobs, dev and test environments, demos, and proofs of concept. It's easy to deploy low-priority VMs by using a virtual machine scale set.
-The Spot VMs can be evicted when Azure needs the capacity or when the price goes above your maximum price. In addition, the customer can choose to get a 30-second eviction notice and attempt to redeploy. 
+## Feature end of support
-The other key difference is that Azure Spot pricing is variable and based on the capacity for size or SKU in an Azure region. Prices change slowly to provide stabilization. The price will never go above pay-as-you-go rates.
+Low-priority VMs are a deprecated preview feature and won't be generally available. Spot VMs offered through the Azure Spot Virtual Machines service are the official, preemptible offering from the Azure compute platform. Spot Virtual Machines is generally available. On September 30, 2025, we'll retire the low-priority VMs feature. After that date, existing low-priority pools in Batch might no longer work and you can't provision new low-priority VMs.
-When it comes to eviction, you have two policy options to choose between:
+## Alternative: Use Azure Spot Virtual Machines
-* Stop/Deallocate (default) – when evicted, the VM is deallocated, but you keep (and pay for) underlying disks. This is the ideal for cases where the state is stored on disks.
-* Delete – when evicted, the VM and underlying disks are deleted.
+As of May 2020, Azure offers spot VMs in Batch in addition to low-priority VMs. Like low-priority VMs, you can use the spot VM option to purchase spare capacity at a deeply discounted price in exchange for the possibility that the VM will be evicted. Unlike low-priority VMs, you can use the spot VM option for single VMs and scale sets. Virtual machine scale sets scale up to meet demand. When used with a spot VM, a virtual machine scale set allocates only when capacity is available.
-While similar in idea, there are a few key differences between these two purchasing options:
+A spot VM in Batch can be evicted when Azure needs the capacity or when the cost goes above your set maximum price. You also can choose to receive a 30-second eviction notice and attempt to redeploy.
-| | **Low Priority VMs** | **Spot VMs** |
+Spot VM pricing is variable and based on the capacity of a VM size or SKU in an Azure region. Prices change slowly to provide stabilization. The price will never go above pay-as-you-go rates.
+
+For VM eviction policy, you can choose from two options:
+
+- **Stop/Deallocate** (default): When a VM is evicted, the VM is deallocated, but you keep (and pay for) underlying disks. This option is ideal when you store state on disks.
+
+- **Delete**: When a VM is evicted, the VM and underlying disks are deleted.
+
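As a rough illustration of these eviction options outside of Batch, the following Azure CLI sketch creates a single spot VM with the default **Stop/Deallocate** policy and no price cap beyond the pay-as-you-go rate; the resource group, VM name, and image alias are placeholders.

```azurecli
# Create a spot VM that is deallocated (not deleted) when evicted.
# --max-price -1 means the VM is evicted only for capacity, never for price.
az vm create \
  --resource-group myResourceGroup \
  --name mySpotVM \
  --image Ubuntu2204 \
  --priority Spot \
  --max-price -1 \
  --eviction-policy Deallocate
```
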
+Although the two purchasing options are similar, be aware of a few key differences:
+
+| Factor | Low-priority VMs | Spot VMs |
||||
-| **Availability** | **Azure Batch** | **Single VMs, Virtual machine scale sets** |
-| **Pricing** | **Fixed pricing** | **Variable pricing with ability to set maximum price** |
-| **Eviction/Preemption** | **Preempted when Azure needs the capacity. Tasks on preempted node VMs are re-queued and run again.** | **Evicted when Azure needs the capacity or if the price exceeds your maximum. If evicted for price and afterward the price goes below your maximum, the VM will not be automatically restarted.** |
+| Availability | Azure Batch | Single VMs, virtual machine scale sets |
+| Pricing | Fixed pricing | Variable pricing with ability to set maximum price |
+| Eviction or preemption | Preempted when Azure needs the capacity. Tasks on preempted node VMs are requeued and run again. | Evicted when Azure needs the capacity or if the price exceeds your maximum. If evicted for price and afterward the price goes below your maximum, the VM isn't automatically restarted. |
+
+## Migrate a low-priority VM pool or create a spot VM pool
+
+To include spot VMs when you scale in user subscription mode:
+
+1. In the Azure portal, select the Batch account and view an existing pool or create a new pool.
-## Migration steps
+1. Under **Scale**, select either **Target dedicated nodes** or **Target Spot/low-priority nodes**.
-Customers in User Subscription mode can include Spot VMs using the following the steps below:
+ :::image type="content" source="media/certificates/low-priority-vms-scale-target-nodes.png" alt-text="Screenshot that shows how to scale target nodes.":::
-1. In the Azure portal, select the Batch account and view the existing pool or create a new pool.
-2. Under **Scale**, users can choose 'Target dedicated nodes' or 'Target Spot/low-priority nodes.'
+1. For an existing pool, select the pool, and then select **Scale** to update the number of spot nodes required based on the job scheduled.
- ![Scale Target Nodes](../batch/media/certificates/low-priority-vms-scale-target-nodes.png)
+1. Select **Save**.
-3. Navigate to the existing Pool and select 'Scale' to update the number of Spot nodes required based on the job scheduled.
-4. Click **Save**.
+You can't use spot VMs in Batch managed mode. Instead, switch to user subscription mode and re-create the Batch account, pool, and jobs.
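If you manage pools with the Azure CLI instead of the portal, a sketch like the following adjusts the node targets on an existing pool; the account, resource group, and pool IDs are placeholders, and in user subscription mode the `--target-low-priority-nodes` value is what requests spot nodes.

```azurecli
# Authenticate data-plane commands against the Batch account.
az batch account login \
  --name mybatchaccount \
  --resource-group myResourceGroup

# Shift the pool from dedicated nodes to spot/low-priority nodes.
az batch pool resize \
  --pool-id mypool \
  --target-dedicated-nodes 0 \
  --target-low-priority-nodes 10
```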
-Customers in Batch Managed mode must recreate the Batch account, pool, and jobs under User Subscription mode to take advantage of spot VMs.
+## FAQs
-## FAQ
+- How do I create a new Batch account, job, or pool?
-* How to create a new Batch account /job/pool?
+ See the [quickstart](./batch-account-create-portal.md) to create a new Batch account, job, or pool.
- See the quick start [link](./batch-account-create-portal.md) on creating a new Batch account/pool/task.
+- Are spot VMs available in Batch managed mode?
-* Are Spot VMs available in Batch Managed mode?
+ No. In Batch accounts, spot VMs are available only in user subscription mode.
- No, Spot VMs are available in User Subscription mode - Batch accounts only.
+- What is the pricing and eviction policy of spot VMs? Can I view pricing history and eviction rates?
-* What is the pricing and eviction policy of Spot VMs? Can I view pricing history and eviction rates?
+ Yes. In the Azure portal, you can see historical pricing and eviction rates per size in a region.
- See [Spot VMs](../virtual-machines/spot-vms.md) for more information on using Spot VMs. Yes, you can see historical pricing and eviction rates per size in a region in the portal.
+ For more information about using spot VMs, see [Spot Virtual Machines](../virtual-machines/spot-vms.md).
## Next steps
-Use the [CLI](../virtual-machines/linux/spot-cli.md), [portal](../virtual-machines/spot-portal.md), [ARM template](../virtual-machines/linux/spot-template.md), or [PowerShell](../virtual-machines/windows/spot-powershell.md) to deploy Azure Spot Virtual Machines.
+Use the [CLI](../virtual-machines/linux/spot-cli.md), [Azure portal](../virtual-machines/spot-portal.md), [ARM template](../virtual-machines/linux/spot-template.md), or [PowerShell](../virtual-machines/windows/spot-powershell.md) to deploy Azure Spot Virtual Machines.
-You can also deploy a [scale set with Azure Spot Virtual Machine instances](../virtual-machine-scale-sets/use-spot.md).
+You can also deploy a [scale set that has Azure Spot Virtual Machines instances](../virtual-machine-scale-sets/use-spot.md).
cdn Cdn Verizon Premium Rules Engine Reference Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine-reference-features.md
The available types of features are:
* [Origin](#origin) * [Specialty](#specialty) * [URL](#url)
-* [Web Application Firewall](#waf)
### <a name="access"></a>Access
These features allow a request to be redirected or rewritten to a different URL.
**[Back to the top](#top)**
-### <a name="waf"></a>Web Application Firewall
-
-The [Web Application Firewall](https://docs.edgecast.com/pci-cdn/Content/Web-Security/Web-Application-Firewall-WAF.htm) feature determines whether a request will be screened by Web Application Firewall.
-
-**[Back to the top](#top)**
- For the most recent features, see the [Verizon Rules Engine documentation](https://docs.vdms.com/cdn/https://docsupdatetracker.net/index.html#Quick_References/HRE_QR.htm#Actions). ## Next steps
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
Title: Audio Content Creation - Speech service
-description: Audio Content Creation is an online tool that allows you to customize and fine-tune Microsoft's text-to-speech output for your apps and products.
+description: Audio Content Creation is an online tool that allows you to run text-to-speech synthesis without writing any code.
Previously updated : 01/23/2022 Last updated : 09/25/2022
-# Improve synthesis with the Audio Content Creation tool
+# Speech synthesis with the Audio Content Creation tool
-[Audio Content Creation](https://aka.ms/audiocontentcreation) is an easy-to-use and powerful tool that lets you build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can fine-tune text-to-speech voices and design customized audio experiences in an efficient and low-cost way.
+You can use the [Audio Content Creation](https://aka.ms/audiocontentcreation) tool in Speech Studio for text-to-speech synthesis without writing any code. You can use the output audio as-is, or as a starting point for further customization.
+
+Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can efficiently fine-tune text-to-speech voices and design customized audio experiences.
The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust text-to-speech output attributes in real time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
+- No-code approach: You can use the Audio Content Creation tool for text-to-speech synthesis without writing any code. The output audio might be the final deliverable that you want. For example, you can use the output audio for a podcast or a video narration.
+- Developer-friendly: You can listen to the output audio and adjust the SSML to improve speech synthesis. Then you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-basics.md) to integrate the SSML into your applications. For example, you can use the SSML for building a chat bot.
+ You have easy access to a broad portfolio of [languages and voices](language-support.md?tabs=stt-tts). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
-To learn more, view the [Audio Content Creation tutorial video](https://youtu.be/ygApYuOOG6w).
+To learn more, view the Audio Content Creation tutorial video [on YouTube](https://youtu.be/ygApYuOOG6w).
## Get started
-Audio Content Creation is a free tool, but you'll pay for the Speech service that you consume. To work with the tool, you need to sign in with an Azure account and create a Speech resource. For each Azure account, you have free monthly speech quotas, which include 0.5 million characters for prebuilt neural voices (referred to as *Neural* on the [pricing page](https://aka.ms/speech-pricing)). The monthly allotted amount is usually enough for a small content team of around 3-5 people.
+The Audio Content Creation tool in Speech Studio is free to access, but you'll pay for Speech service usage. To work with the tool, you need to sign in with an Azure account and create a Speech resource. For each Azure account, you have free monthly speech quotas, which include 0.5 million characters for prebuilt neural voices (referred to as *Neural* on the [pricing page](https://aka.ms/speech-pricing)). The monthly allotted amount is usually enough for a small content team of around 3-5 people.
The next sections cover how to create an Azure account and get a Speech resource.
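If you'd rather script the setup than use the portal, the following Azure CLI sketch creates a Speech resource on the free tier; the resource name, resource group, and region are placeholders, and the `F0` SKU is subject to the free quota described above.

```azurecli
# Create a Speech resource on the free (F0) tier.
az cognitiveservices account create \
  --name my-speech-resource \
  --resource-group myResourceGroup \
  --kind SpeechServices \
  --sku F0 \
  --location eastus \
  --yes
```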
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
This table lists some of the key pronunciation assessment results.
| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | | `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. | | `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
-| `ErrorType` | This value indicates whether a word is omitted, inserted, or mispronounced, compared to the `ReferenceText`. Possible values are `None`, `Omission`, `Insertion`, and `Mispronunciation`. |
+| `ErrorType` | This value indicates whether a word is omitted, inserted, or mispronounced, compared to the `ReferenceText`. Possible values are `None`, `Omission`, `Insertion`, and `Mispronunciation`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.|
### JSON result example
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
Previously updated : 05/13/2022 Last updated : 09/25/2022
In Speech Studio, the following Speech service features are available as project
* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
-* [Audio Content Creation](https://aka.ms/speechstudio/audiocontentcreation): Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots, with the easy-to-use [Audio Content Creation](how-to-audio-content-creation.md) tool. With Speech Studio, you can export these audio files to use in your applications.
+* [Audio Content Creation](https://aka.ms/speechstudio/audiocontentcreation): A no-code approach for text-to-speech synthesis. You can use the output audio as-is, or as a starting point for further customization. You can build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. For more information, see the [Audio Content Creation](how-to-audio-content-creation.md) documentation.
* [Custom Keyword](https://aka.ms/speechstudio/customkeyword): A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Speech Synthesis Markup Language (SSML) is an XML-based markup language that lets developers specify how input text is converted into synthesized speech by using text-to-speech. Compared to plain text, SSML allows developers to fine-tune the pitch, pronunciation, speaking rate, volume, and more of the text-to-speech output. Normal punctuation, such as pausing after a period, or using the correct intonation when a sentence ends with a question mark are automatically handled.
+> [!TIP]
+> Author plain text and SSML using the [Audio Content Creation](https://aka.ms/audiocontentcreation) tool in Speech Studio. You can listen to the output audio and adjust the SSML to improve speech synthesis. For more information, see [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md).
+ The Speech service implementation of SSML is based on the World Wide Web Consortium's [Speech Synthesis Markup Language Version 1.0](https://www.w3.org/TR/2004/REC-speech-synthesis-20040907/). > [!IMPORTANT]
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Previously updated : 09/16/2022 Last updated : 09/25/2022 keywords: text to speech
Here's more information about neural text-to-speech features in the Speech servi
* **Fine-tuning text-to-speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document.
- You can use SSML to define your own lexicons or switch to different speaking styles. With the [multilingual voices](https://techcommunity.microsoft.com/t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981), you can also adjust the speaking languages via SSML. To fine-tune the voice output for your scenario, see [Improve synthesis with Speech Synthesis Markup Language](speech-synthesis-markup.md).
+ You can use SSML to define your own lexicons or switch to different speaking styles. With the [multilingual voices](https://techcommunity.microsoft.com/t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981), you can also adjust the speaking languages via SSML. To fine-tune the voice output for your scenario, see [Improve synthesis with Speech Synthesis Markup Language](speech-synthesis-markup.md) and [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md).
* **Visemes**: [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw, and tongue in producing a particular phoneme. Visemes have a strong correlation with voices and phonemes.
Here's more information about neural text-to-speech features in the Speech servi
To get started with text-to-speech, see the [quickstart](get-started-text-to-speech.md). Text-to-speech is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-text-to-speech.md), and the [Speech CLI](spx-overview.md).
+> [!TIP]
+> To convert text-to-speech with a no-code approach, try the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://aka.ms/speechstudio/audiocontentcreation).
+ ## Sample code Sample code for text-to-speech is available on GitHub. These samples cover text-to-speech conversion in most popular programming languages:
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/quickstart.md
Previously updated : 06/29/2022 Last updated : 09/28/2022 zone_pivot_groups: usage-custom-language-features
cognitive-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
Previously updated : 05/27/2022 Last updated : 09/28/2022
In this tutorial, you'll learn how to:
- A Microsoft Azure account. [Create a free account](https://azure.microsoft.com/free/cognitive-services/) or [sign in](https://portal.azure.com/). - A Language resource. If you don't have one, you can [create one](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics). - The [Language resource key](../../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) that was generated for you during sign-up.-- Customer comments. You can use our example data or your own data. This tutorial assumes you're using our example data.
+- Customer comments. You can [use our example data](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/language-service/tutorials/comments.csv) or your own data. This tutorial assumes you're using our example data.
## Load customer data
communication-services Government Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/government-cloud.md
+
+ Title: Support for government clouds
+
+description: Support for external user from Azure Communication Services connecting to Microsoft Teams in government clouds
++ Last updated : 9/22/2022+++++
+# Support for government clouds
+Developers can integrate Azure Communication Services to connect to Microsoft Teams in government clouds as well. Azure Communication Services allows connecting to a Microsoft 365 cloud that meets government security and compliance requirements. The following sections show supported clouds and scenarios for external users from Azure Communication Services.
+
+## Supported cloud parity between Microsoft 365 and Azure
+The following table shows the pairs of government clouds that are currently supported by Azure Communication Services.
+
+| Microsoft 365 cloud| Azure cloud| Support |
+| | | |
+| [GCC](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc) | Public | ❌ |
+| [GCC-H](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod) | [US Government](/azure/azure-government/documentation-government-welcome) | ✔️ |
+
+## Supported use cases
+
+The following table shows supported use cases for government cloud users with Azure Communication Services.
+
+| Scenario | Supported |
+| | |
+| Join Teams meeting | ✔️ |
+| Join channel Teams meeting [1] | ✔️ |
+| Join Teams webinar | ❌ |
+| [Join Teams live events](/microsoftteams/teams-live-events/what-are-teams-live-events).| ❌ |
+| Join [Teams meeting scheduled in application for personal use](https://www.microsoft.com/microsoft-teams/teams-for-home) | ❌ |
+| Join Teams 1:1 or group call | ❌ |
+| Join Teams 1:1 or group chat | ❌ |
+
+- [1] Government cloud users can join a channel Teams meeting with audio and video, but they won't be able to send or receive any chat messages.
communication-services Government Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/government-cloud.md
+
+ Title: Support for government clouds as Teams user
+
+description: Support for Teams user from Azure Communication Services connecting to Microsoft Teams in government clouds
++ Last updated : 9/22/2022+++++
+# Support for government clouds as Teams user
+Developers can integrate Azure Communication Services to connect to Microsoft Teams in government clouds as well. Azure Communication Services allows connecting to a Microsoft 365 cloud that meets government security and compliance requirements. The following sections show supported clouds and scenarios for Teams users.
+
+## Supported cloud parity between Microsoft 365 and Azure
+The following table shows the pairs of government clouds that are currently supported by Azure Communication Services.
+
+| Microsoft 365 cloud| Azure cloud| Support |
+| | | |
+| [GCC](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc) | Public | ❌ |
+| [GCC-H](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod) | [US Government](/azure/azure-government/documentation-government-welcome) | ❌ |
+
+## Supported use cases
+
+The following table shows supported use cases for a government cloud Teams user with Azure Communication Services.
+
+| Scenario | Supported |
+| | |
+| Make a voice-over-IP (VoIP) call to Teams user | ❌ |
+| Make a phone (PSTN) call | ❌ |
+| Accept incoming voice-over-IP (VoIP) call for Teams user | ❌ |
+| Accept incoming phone (PSTN) for Teams user | ❌ |
+| Join Teams meeting | ❌ |
+| Join channel Teams meeting | ❌ |
+| Join Teams webinar | ❌ |
+| [Join Teams live events](/microsoftteams/teams-live-events/what-are-teams-live-events).| ❌ |
+| Join [Teams meeting scheduled in an application for personal use](https://www.microsoft.com/microsoft-teams/teams-for-home) | ❌ |
+| Join Teams 1:1 or group call | ❌ |
+| Send a message to 1:1 chat, group chat or Teams meeting chat| ❌ |
+| Get messages from 1:1 chat, group chat or Teams meeting chat | ❌ |
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Use rooms when you need any of the following capabilities:
The picture below illustrates the concept of managing and joining the rooms. ### Rooms API/SDKs
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
ms.suite: integration + Last updated 09/14/2022
connectors Connect Common Data Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connect-common-data-service.md
ms.suite: integration
Last updated 09/07/2022+ tags: connectors
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
ms.suite: integration
Last updated 09/13/2022+ tags: connectors
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
tags: connectors
# Connect to an SQL database from workflows in Azure Logic Apps + This article shows how to access your SQL database from a workflow in Azure Logic Apps with the SQL Server connector. You can then create automated workflows that run when triggered by events in your SQL database or in other systems and run actions to manage your SQL data and resources. For example, your workflow can run actions that get, insert, and delete data or that can run SQL queries and stored procedures. Your workflow can check for new records in a non-SQL database, do some processing work, use the results to create new records in your SQL database, and send email alerts about the new records.
The SQL Server connector has different versions, based on [logic app type and ho
| Logic app | Environment | Connector version | |--|-|-|
-| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector (Standard class). For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql). <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector (Standard class). For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
| **Consumption** | Integration service environment (ISE) | Managed connector (Standard class) and ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Azure-hosted) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version doesn't have triggers. You can use either an SQL managed connector trigger or a different trigger. <br><br>- The built-in version connects directly to an SQL server and database requiring only a connection string. You don't need the on-premises data gateway. <br><br>- The built-in version can directly access Azure virtual networks. You don't need the on-premises data gateway.<br><br>For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql/) <br>- [SQL Server built-in connector reference](#built-in-connector-operations) section later in this article <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Azure-hosted) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version doesn't have triggers. You can use the SQL managed connector trigger or a different trigger. <br><br>- The built-in version can connect directly to an SQL database and access Azure virtual networks. You don't need an on-premises data gateway.<br><br>For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql/) <br>- [SQL Server built-in connector reference](#built-in-connector-operations) section later in this article <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
## Limitations
For more information, review the [SQL Server managed connector reference](/conne
> If you use an SQL Server connection string that you copied directly from the Azure portal, > you have to manually add your password to the connection string.
- * For an SQL database in Azure, the connection string has the following format:
+ * For an SQL database in Azure, the connection string has the following format:
`Server=tcp:{your-server-name}.database.windows.net,1433;Initial Catalog={your-database-name};Persist Security Info=False;User ID={your-user-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;`
For more information, review the [SQL Server managed connector reference](/conne
* Standard logic app workflow
- You can use the SQL Server built-in connector, which requires a connection string. The built-in connector currently supports only SQL Server Authentication. You can adjust connection pooling by specifying parameters in the connection string. For more information, review [Connection Pooling](/dotnet/framework/data/adonet/connection-pooling).
+ You can use the SQL Server built-in connector or managed connector.
- To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
+ * To use the built-in connector, you can authenticate your connection with either a managed identity, Active Directory OAuth, or a connection string. You can adjust connection pooling by specifying parameters in the connection string. For more information, review [Connection Pooling](/dotnet/framework/data/adonet/connection-pooling).
- For other connector requirements, review [SQL Server managed connector reference](/connectors/sql/).
+ * To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps. For other connector requirements, review the [SQL Server managed connector reference](/connectors/sql/).
<a name="add-sql-trigger"></a>
In the connection information box, complete the following steps:
| Authentication | Description | |-|-| | **Connection String** | - Supported only in Standard workflows with the SQL Server built-in connector. <br><br>- Requires the connection string to your SQL server and database. |
- | **Logic Apps Managed Identity** | - Supported with the SQL Server managed connector and ISE-versioned connector. In Standard workflows, this authentication type is available for the SQL Server built-in connector, but the option is named **Managed identity** instead. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). |
- | **Active Directory OAuth** | - Supported only in Standard workflows with the SQL Server built-in connector. For more information, see the following documentation: <br><br>- [Enable Azure Active Directory Open Authentication (Azure AD OAuth)](../logic-apps/logic-apps-securing-a-logic-app.md#enable-oauth) <br>- [Azure Active Directory Open Authentication](../logic-apps/logic-apps-securing-a-logic-app.md#azure-active-directory-oauth-authentication) |
+ | **Logic Apps Managed Identity** | - Supported with the SQL Server managed connector and ISE-versioned connector. In Standard workflows, this authentication type is available for the SQL Server built-in connector, but the option is named **Managed identity** instead. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see the following documentation: <br><br>- [Managed identity authentication for SQL Server connector](/connectors/sql/#managed-identity-authentication) <br>- [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles) |
+ | **Active Directory OAuth** | - Supported only in Standard workflows with the SQL Server built-in connector. For more information, see the following documentation: <br><br>- [Authentication for SQL Server connector](/connectors/sql/#authentication) <br>- [Enable Azure Active Directory Open Authentication (Azure AD OAuth)](../logic-apps/logic-apps-securing-a-logic-app.md#enable-oauth) <br>- [Azure Active Directory Open Authentication](../logic-apps/logic-apps-securing-a-logic-app.md#azure-active-directory-oauth-authentication) |
| **Service principal (Azure AD application)** | - Supported with the SQL Server managed connector. <br><br>- Requires an Azure AD application and service principal. For more information, see [Create an Azure AD application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). | | [**Azure AD Integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires a valid managed identity in Azure Active Directory (Azure AD) that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](/azure/azure-sql/database/authentication-aad-overview) | | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
- The following examples show how the connection information box might appear if you select **Azure AD Integrated** authentication.
+ The following examples show how the connection information box might appear if you use the SQL Server *managed* connector and select **Azure AD Integrated** authentication:
* Consumption logic app workflows
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
ms.suite: integration + Last updated 09/02/2022
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
ms.suite: integration + Last updated 09/07/2022
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
Features include:
## Configuration
-The following is an example of the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section of a container app resource template. The excerpt shows the available configuration options when setting up a container.
+The following code is an example of the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section of a container app resource template. The excerpt shows the available configuration options when setting up a container.
```json "containers": [
The following is an example of the `containers` array in the [`properties.templa
| `volumeMounts` | An array of volume mount definitions. | You can define a temporary volume or multiple permanent storage volumes for your container. For more information about storage volumes, see [Use storage mounts in Azure Container Apps](storage-mounts.md).| | `probes`| An array of health probes enabled in the container. | This feature is based on Kubernetes health probes. For more information about probes settings, see [Health probes in Azure Container Apps](health-probes.md).|
-When allocating resources, the total amount of CPUs and memory requested for all the containers in a container app must add up to one of the following combinations.
+The total CPU and memory allocations requested for all the containers in a container app must add up to one of the following combinations.
| vCPUs (cores) | Memory | |||
To use a container registry, you define the required fields in `registries` arra
} ```
-With the registry information set up, the saved credentials can be used to pull a container image from the private registry when your app is deployed.
+With the registry information added, the saved credentials can be used to pull a container image from the private registry when your app is deployed.
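For example, if you manage the app with the Azure CLI (containerapp extension), a sketch like the following stores registry credentials on an existing container app; the app, registry, and credential values are placeholders.

```azurecli
# Save private registry credentials on the container app.
az containerapp registry set \
  --name my-container-app \
  --resource-group myResourceGroup \
  --server myregistry.azurecr.io \
  --username <registry-username> \
  --password <registry-password>
```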
The following example shows how to configure Azure Container Registry credentials in a container app.
The following example shows how to configure Azure Container Registry credential
### Managed identity with Azure Container Registry
-You can use an Azure managed identity to authenticate with Azure Container Registry instead of using a username and password. To use a managed identity:
+You can use an Azure managed identity to authenticate with Azure Container Registry instead of using a username and password. For more information, see [Managed identities in Azure Container Apps](managed-identity.md).
-- Assign a system-assigned or user-assigned managed identity to your container app.-- Specify the managed identity you want to use for each registry.-
-> [!NOTE]
-> You will need to [enable an admin user account](../container-registry/container-registry-authentication.md) in your Azure
-> Container Registry even when you use an Azure managed identity. You will not need to use the ACR admin credentials to pull images into Azure
-> Container Apps, however, it is a prequisite to have the ACR admin user account enabled in the registry Azure Container Apps is pulling from.
-
-When assigned a managed identity to a registry, use the managed identity resource ID for a user-assigned identity, or "system" for the system-assigned identity. For more information about using managed identities see, [Managed identities in Azure Container Apps Preview](managed-identity.md).
+When assigning a managed identity to a registry, use the managed identity resource ID for a user-assigned identity, or "system" for the system-assigned identity.
```json {
When assigned a managed identity to a registry, use the managed identity resourc
} ```
-The managed identity must have `AcrPull` access for the Azure Container Registry. For more information about assigning Azure Container Registry permissions to managed identities, see [Authenticate with managed identity](../container-registry/container-registry-authentication-managed-identity.md).
-
-#### Configure a user-assigned managed identity
-
-To configure a user-assigned managed identity:
-
-1. Create the user-assigned identity if it doesn't exist.
-1. Give the user-assigned identity `AcrPull` permission to your private repository.
-1. Add the identity to your container app configuration as shown above.
- For more information about configuring user-assigned identities, see [Add a user-assigned identity](managed-identity.md#add-a-user-assigned-identity).
-#### Configure a system-assigned managed identity
-
-System-assigned identities are created at the time your container app is created, and therefore, won't have `AcrPull` access to your Azure Container Registry. As a result, the image can't be pulled from your private registry when your app is first deployed.
-
-To configure a system-assigned identity, you must use one of the following methods.
--- **Option 1**: Use a public registry for the initial deployment:
- 1. Create your container app using a public image and a system-assigned identity.
- 1. Give the new system-assigned identity `AcrPull` access to your private Azure Container Registry.
- 1. Update your container app replacing the public image with the image from your private Azure Container Registry.
-- **Option 2**: Restart your app after assigning permissions:
- 1. Create your container app using a private image and a system-assigned identity. (The deployment will result in a failure to pull the image.)
- 1. Give the new system-assigned identity `AcrPull` access to your private Azure Container Registry.
- 1. Restart your container app revision.
-
-For more information about configuring system-assigned identities, see [Add a system-assigned identity](managed-identity.md#add-a-system-assigned-identity).
- ## Limitations Azure Container Apps has the following limitations:
container-apps Managed Identity Image Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md
+
+ Title: Azure Container Apps image pull from Azure Container Registry with managed identity
+description: Set up Azure Container Apps to authenticate Azure Container Registry image pulls with managed identity
++++ Last updated : 09/16/2022+
+zone_pivot_groups: container-apps-interface-types
++
+# Azure Container Apps image pull with managed identity
+
+You can pull images from private repositories in Microsoft Azure Container Registry using managed identities for authentication to avoid the use of administrative credentials. You can use a system-assigned or user-assigned managed identity to authenticate with Azure Container Registry.
+
+With a system-assigned managed identity, the identity is created and managed by Azure Container Apps. The identity is tied to your container app and is deleted when your app is deleted. With a user-assigned managed identity, you create and manage the identity outside of Azure Container Apps. It can be assigned to multiple Azure resources, including Azure Container Apps.
++
+This article describes how to use the Azure portal to configure your container app to use user-assigned and system-assigned managed identities to pull images from private Azure Container Registry repositories.
+
+## User-assigned managed identity
+
+The following steps describe the process to configure your container app to use a user-assigned managed identity to pull images from private Azure Container Registry repositories.
+
+1. Create a container app with a public image.
+1. Add the user-assigned managed identity to the container app.
+1. Create a container app revision with a private image and the user-assigned managed identity.
+
+### Prerequisites
+
+- An Azure account with an active subscription.
+ - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+- A private Azure Container Registry containing an image you want to pull.
+- Create a user-assigned managed identity. For more information, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity).
+
+### Create a container app
+
+Use the following steps to create a container app with the default quickstart image.
+
+1. Navigate to the portal **Home** page.
+1. Search for **Container Apps** in the top search bar.
+1. Select **Container Apps** in the search results.
+1. Select the **Create** button.
+1. In the *Basics* tab, do the following actions.
+
+ | Setting | Action |
+ |||
+ | **Subscription** | Select your Azure subscription. |
+ | **Resource group** | Select an existing resource group or create a new one. |
+ | **Container app name** | Enter a container app name. |
+ | **Location** | Select a location. |
+ | **Create Container App Environment** | Create a new or select an existing environment. |
+
+1. Select the **Review + Create** button at the bottom of the **Create Container App** page.
+1. Select the **Create** button at the bottom of the **Create Container App** window.
+
+Allow a few minutes for the container app deployment to finish. When deployment is complete, select **Go to resource**.
+
+### Add the user-assigned managed identity
+
+1. Select **Identity** from the left menu.
+1. Select the **User assigned** tab.
+1. Select the **Add user assigned managed identity** button.
+1. Select your subscription.
+1. Select the identity you created.
+1. Select **Add**.
+
+### Create a container app revision
+
+Create a container app revision with a private image and the user-assigned managed identity.
+
+1. Select **Revision Management** from the left menu.
+1. Select **Create new revision**.
+1. Select the container image from the **Container Image** table.
+1. Enter the information in the *Edit a container* dialog.
+
+ |Field|Action|
+ |--||
+ |**Name**|Enter a name for the container.|
+ |**Image source**|Select **Azure Container Registry**.|
+ |**Authentication**|Select **Managed Identity**.|
+ |**Identity**|Select the identity you created from the drop-down menu.|
+ |**Registry**|Select the registry you want to use from the drop-down menu.|
+ |**Image**|Enter the name of the image you want to use.|
+ |**Image Tag**|Enter the name and tag of the image you want to pull.|
+
+ :::image type="content" source="media/managed-identity/screenshot-edit-a-container-user-assigned-identity.png" alt-text="Screen shot of the Edit a container dialog entering user assigned managed identity.":::
+ >[!NOTE]
+ > If the administrative credentials are not enabled on your Azure Container Registry registry, you will see a warning message displayed and you will need to enter the image name and tag information manually.
+
+1. Select **Save**.
+1. Select **Create** from the **Create and deploy new revision** page.
+
+A new revision will be created and deployed. The portal will automatically attempt to add the `acrpull` role to the user-assigned managed identity. If the role isn't added, you can add it manually.
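One way to add the role yourself is with the Azure CLI; this is a minimal sketch that assumes placeholder registry, identity, and resource group names.

```azurecli
# Look up the registry resource ID and the identity's principal ID.
ACR_ID=$(az acr show --name <registry-name> --query id --output tsv)
PRINCIPAL_ID=$(az identity show --name <identity-name> --resource-group <resource-group> --query principalId --output tsv)

# Grant the identity permission to pull images from the registry.
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role AcrPull \
  --scope "$ACR_ID"
```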
+
+### Clean up resources
+
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
+
+>[!WARNING]
+>Deleting the resource group will delete all the resources in the group. If you have other resources in the group, they will also be deleted. If you want to keep the resources, you can delete the container app instance and the container app environment.
+
+1. Select your resource group from the *Overview* section.
+1. Select the **Delete resource group** button at the top of the resource group *Overview*.
+1. Enter the resource group name in the confirmation dialog.
+1. Select **Delete**.
+ The process to delete the resource group may take a few minutes to complete.
+
+## System-assigned managed identity
+
+The method for configuring a system-assigned managed identity in the Azure portal is the same as configuring a user-assigned managed identity. The only difference is that you don't need to create a user-assigned managed identity. Instead, the system-assigned managed identity is created when you create the container app.
+
+The method to configure a system-assigned managed identity in the Azure portal is:
+
+1. Create a container app with a public image.
+1. Create a container app revision with a private image and the system-assigned managed identity.
+
+### Prerequisites
+
+- An Azure account with an active subscription.
+ - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+- A private Azure Container Registry containing an image you want to pull. See [Create a private Azure Container Registry](../container-registry/container-registry-get-started-portal.md#create-a-container-registry).
+
+### Create a container app
+
+Follow these steps to create a container app with the default quickstart image.
+
+1. Navigate to the portal **Home** page.
+1. Search for **Container Apps** in the top search bar.
+1. Select **Container Apps** in the search results.
+1. Select the **Create** button.
+1. In the **Basics** tab, do the following actions.
+
+ | Setting | Action |
+ |||
+ | **Subscription** | Select your Azure subscription. |
+ | **Resource group** | Select an existing resource group or create a new one. |
+ | **Container app name** | Enter a container app name. |
+ | **Location** | Select a location. |
+ | **Create Container App Environment** | Create a new or select an existing environment. |
+
+1. Select the **Review + Create** button at the bottom of the **Create Container App** page.
+1. Select the **Create** button at the bottom of the **Create Container App** page.
+
+Allow a few minutes for the container app deployment to finish. When deployment is complete, select **Go to resource**.
+
+### Edit and deploy a revision
+
+Edit the container to use the image from your private Azure Container Registry, and configure the authentication to use system-assigned identity.
+
+1. Select **Containers** from the side menu on the left.
+1. Select **Edit and deploy**.
+1. Select the *simple-hello-world-container* container from the list.
+
+ | Setting | Action |
+ |||
+ |**Name**| Enter the container app name. |
+ |**Image source**| Select **Azure Container Registry**. |
+ |**Authentication**| Select **Managed identity**. |
+ |**Identity**| Select **System assigned**. |
+ |**Registry**| Enter the Registry name. |
+ |**Image**| Enter the image name. |
+ |**Image tag**| Enter the tag. |
+
+ :::image type="content" source="media/managed-identity/screenshot-edit-a-container-system-assigned-identity.png" alt-text="Screen shot Edit a container with system-assigned managed identity.":::
+ >[!NOTE]
+ > If the administrative credentials are not enabled on your Azure Container Registry registry, you will see a warning message displayed and you will need to enter the image name and tag information manually.
+
+1. Select **Save** at the bottom of the page.
+1. Select **Create** at the bottom of the **Create and deploy new revision** page.
+1. After a few minutes, select **Refresh** on the **Revision management** page to see the new revision.
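If the pull permission isn't in place yet, you can assign the `AcrPull` role to the app's system-assigned identity yourself. The following Azure CLI sketch uses placeholder names and reads the principal ID from the container app resource.

```azurecli
# Read the system-assigned identity's principal ID from the container app.
PRINCIPAL_ID=$(az containerapp show --name <containerapp-name> --resource-group <resource-group> --query identity.principalId --output tsv)
ACR_ID=$(az acr show --name <registry-name> --query id --output tsv)

# Grant the identity permission to pull images from the registry.
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role AcrPull \
  --scope "$ACR_ID"
```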
+
+### Clean up resources
+
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
+
+>[!WARNING]
+>Deleting the resource group will delete all the resources in the group. If you have other resources in the group, they will also be deleted. If you want to keep the resources, you can delete the container app instance and the container app environment.
+
+1. Select your resource group from the *Overview* section.
+1. Select the **Delete resource group** button at the top of the resource group *Overview*.
+1. Enter the resource group name in the confirmation dialog.
+1. Select **Delete**.
+ The process to delete the resource group may take a few minutes to complete.
++
+This article describes how to configure your container app to use managed identities to pull images from a private Azure Container Registry repository using Azure CLI and Azure PowerShell.
+
+## Prerequisites
+
+| Prerequisite | Description |
+|--|-|
+| Azure account | An Azure account with an active subscription. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/). |
+| Azure CLI | If using Azure CLI, [install the Azure CLI](/cli/azure/install-azure-cli) on your local machine. |
+| Azure PowerShell | If using PowerShell, [install the Azure PowerShell](/powershell/azure/install-az-ps) on your local machine. Ensure that the latest version of the Az.App module is installed by running the command `Install-Module -Name Az.App`. |
+|Azure Container Registry | A private Azure Container Registry containing an image you want to pull. [Quickstart: Create a private container registry using the Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) or [Quickstart: Create a private container registry using Azure PowerShell](../container-registry/container-registry-get-started-powershell.md)|
+
+## Setup
+
+First, sign in to Azure from the CLI or PowerShell. Run the following command, and follow the prompts to complete the authentication process.
++
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az login
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Connect-AzAccount
+```
+++
+# [Azure CLI](#tab/azure-cli)
+
+Install the Azure Container Apps extension for the CLI.
+
+```azurecli
+az extension add --name containerapp --upgrade
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+You must have the latest Az PowerShell module installed. Ignore any warnings about modules currently in use.
+
+```azurepowershell
+Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
+```
+
+Now install the Az.App module.
+
+```azurepowershell
+Install-Module -Name Az.App
+```
+++
+Now that you've installed the current extension or module, register the `Microsoft.App` and `Microsoft.OperationalInsights` resource providers if you haven't registered them before.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az provider register --namespace Microsoft.App
+az provider register --namespace Microsoft.OperationalInsights
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.App
+Register-AzResourceProvider -ProviderNamespace Microsoft.OperationalInsights
+```
+++
+# [Azure CLI](#tab/azure-cli)
+
+Next, set the following environment variables. Replace the *\<PLACEHOLDERS\>* with your own values.
+
+```azurecli
+RESOURCE_GROUP="<YOUR_RESOURCE_GROUP_NAME>"
+LOCATION="<YOUR_LOCATION>"
+CONTAINERAPPS_ENVIRONMENT="<YOUR_ENVIRONMENT_NAME>"
+REGISTRY_NAME="<YOUR_REGISTRY_NAME>"
+CONTAINERAPP_NAME="<YOUR_CONTAINERAPP_NAME>"
+IMAGE_NAME="<YOUR_IMAGE_NAME>"
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Next, set the following environment variables. Replace the *\<Placeholders\>* with your own values.
+
+```azurepowershell
+$ResourceGroupName = '<YourResourceGroupName>'
+$Location = '<YourLocation>'
+$ContainerAppsEnvironment = '<YourEnvironmentName>'
+$RegistryName = '<YourRegistryName>'
+$ContainerAppName = '<YourContainerAppName>'
+$ImageName = '<YourImageName>'
+```
+++
+If you already have a resource group, skip this step. Otherwise, create a resource group.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az group create \
+ --name $RESOURCE_GROUP \
+ --location $LOCATION
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup -Location $Location -Name $ResourceGroupName
+```
+++
+### Create a container app environment
+
+If the environment doesn't exist, run the following command:
+
+# [Azure CLI](#tab/azure-cli)
+
+To create the environment, run the following command:
+
+```azurecli
+az containerapp env create \
+ --name $CONTAINERAPPS_ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
+
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+}
+
+New-AzContainerAppManagedEnv @EnvArgs
+```
+++
+Continue to the next section to configure user-assigned managed identity or skip to the [System-assigned managed identity](#system-assigned-managed-identity-1) section.
+
+## User-assigned managed identity
+
+Follow this procedure to configure user-assigned managed identity:
+
+1. Create a user-assigned managed identity.
+1. If you're using PowerShell, assign an `acrpull` role for your registry to the managed identity. The Azure CLI automatically makes this assignment.
+1. Create a container app with the image from the private registry that is authenticated with the user-assigned managed identity.
+
+### Create a user-assigned managed identity
+
+# [Azure CLI](#tab/azure-cli)
+
+Create a user-assigned managed identity. Replace the *\<PLACEHOLDERS\>* with the name of your managed identity.
+
+```azurecli
+IDENTITY="<YOUR_IDENTITY_NAME>"
+```
+
+```azurecli
+az identity create \
+ --name $IDENTITY \
+ --resource-group $RESOURCE_GROUP
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Create a user-assigned managed identity. Replace the *\<Placeholders\>* with the name of your managed identity.
+
+```azurepowershell
+$IdentityName = '<YourIdentityName>'
+```
+
+```azurepowershell
+New-AzUserAssignedIdentity -Name $IdentityName -ResourceGroupName $ResourceGroupName -Location $Location
+```
+++
+# [Azure CLI](#tab/azure-cli)
+
+Get the identity's resource ID.
+
+```azurecli
+IDENTITY_ID=`az identity show \
+ --name $IDENTITY \
+ --resource-group $RESOURCE_GROUP \
+ --query id`
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Get the identity's resource and principal ID.
+
+```azurepowershell
+$IdentityId = (Get-AzUserAssignedIdentity -Name $IdentityName -ResourceGroupName $ResourceGroupName).Id
+$PrincipalId = (Get-AzUserAssignedIdentity -Name $IdentityName -ResourceGroupName $ResourceGroupName).PrincipalId
+```
+
+Get the registry's resource ID. Replace the *\<placeholders\>* with the resource group name for your registry.
+
+```azurepowershell
+$RegistryId = (Get-AzContainerRegistry -ResourceGroupName <RegistryResourceGroup> -Name $RegistryName).Id
+```
+
+Create the `acrpull` role assignment for the identity.
+
+```azurepowershell
+New-AzRoleAssignment -ObjectId $PrincipalId -Scope $RegistryId -RoleDefinitionName acrpull
+```
+++
+### Create a container app
+
+Create your container app with your image from the private registry, authenticated with the user-assigned identity.
+
+# [Azure CLI](#tab/azure-cli)
+
+Copy the identity's resource ID to paste into the *\<IDENTITY_ID\>* placeholders in the command below.
+
+```azurecli
+echo $IDENTITY_ID
+```
+
+```azurecli
+az containerapp create \
+ --name $CONTAINERAPP_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --environment $CONTAINERAPPS_ENVIRONMENT \
+ --user-assigned <IDENTITY_ID> \
+ --registry-identity <IDENTITY_ID> \
+ --registry-server "$REGISTRY_NAME.azurecr.io" \
+ --image "$REGISTRY_NAME.azurecr.io/$IMAGE_NAME:latest"
+```
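+
+Optionally, confirm that the registry was attached to the app with the identity. This query is a sketch; the JSON path is an assumption about the shape of the returned container app resource.
+
+```azurecli
+# Inspect the registry configuration on the new container app (sketch)
+az containerapp show \
+  --name $CONTAINERAPP_NAME \
+  --resource-group $RESOURCE_GROUP \
+  --query properties.configuration.registries
+```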
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$CredentialArgs = @{
+ Server = $RegistryName + '.azurecr.io'
+ Identity = $IdentityId
+}
+$CredentialObject = New-AzContainerAppRegistryCredentialObject @CredentialArgs
+$ImageParams = @{
+ Name = 'my-container-app'
+ Image = $RegistryName + '.azurecr.io/' + $ImageName + ':latest'
+}
+$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
+$EnvId = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).Id
+
+$AppArgs = @{
+ Name = 'my-container-app'
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ ManagedEnvironmentId = $EnvId
+ ConfigurationRegistry = $CredentialObject
+ IdentityType = 'UserAssigned'
+ IdentityUserAssignedIdentity = @{ $IdentityId = @{ } }
+ TemplateContainer = $TemplateObj
+ IngressTargetPort = 80
+ IngressExternal = $true
+}
+New-AzContainerApp @AppArgs
+```
+++
+### Clean up
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this quickstart exist in the specified resource group, they will also be deleted.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
+```
+++
+## System-assigned managed identity
+
+To configure a system-assigned identity, you'll need to:
+
+1. Create a container app with a public image.
+1. Assign a system-assigned managed identity to the container app.
+1. Update the container app with the private image.
+
+### Create a container app
+
+Create a container app with a public image.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az containerapp create \
+ --name $CONTAINERAPP_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --environment $CONTAINERAPPS_ENVIRONMENT \
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --target-port 80 \
+ --ingress external
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+$ImageParams = @{
+ Name = "my-container-app"
+ Image = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
+}
+$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
+$EnvId = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).Id
+
+$AppArgs = @{
+ Name = "my-container-app"
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ ManagedEnvironmentId = $EnvId
+ IdentityType = "SystemAssigned"
+ TemplateContainer = $TemplateObj
+ IngressTargetPort = 80
+ IngressExternal = $true
+
+}
+New-AzContainerApp @AppArgs
+```
+++
+### Update the container app
+
+Update the container app with the image from your private container registry, and add a system-assigned identity to authenticate the Azure Container Registry pull. You can also include other settings necessary for your container app, such as ingress, scale, and Dapr settings.
+
+If you are using an image tag other than `latest`, replace the `latest` value with your value.
++
+# [Azure CLI](#tab/azure-cli)
+
+Set the registry server and turn on system-assigned managed identity in the container app.
+
+```azurecli
+az containerapp registry set \
+ --name $CONTAINERAPP_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --identity system \
+ --server "$REGISTRY_NAME.azurecr.io"
+```
++
+```azurecli
+az containerapp update \
+ --name $CONTAINERAPP_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --image "$REGISTRY_NAME.azurecr.io/$IMAGE_NAME:latest"
+```
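+
+Optionally, verify that the system-assigned identity is now enabled on the container app. This is a sketch; the `identity` query path is an assumption about the shape of the returned resource.
+
+```azurecli
+# Show the container app's managed identity block (sketch)
+az containerapp show \
+  --name $CONTAINERAPP_NAME \
+  --resource-group $RESOURCE_GROUP \
+  --query identity
+```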
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+$CredentialArgs = @{
+ Server = $RegistryName + '.azurecr.io'
+ Identity = 'system'
+}
+$CredentialObject = New-AzContainerAppRegistryCredentialObject @CredentialArgs
+$ImageParams = @{
+ Name = 'my-container-app'
+ Image = $RegistryName + ".azurecr.io/" + $ImageName + ":latest"
+}
+$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
+
+$AppArgs = @{
+ Name = 'my-container-app'
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ ConfigurationRegistry = $CredentialObject
+ IdentityType = 'SystemAssigned'
+ TemplateContainer = $TemplateObj
+ IngressTargetPort = 80
+ IngressExternal = $true
+}
+Update-AzContainerApp @AppArgs
+```
+++
+### Clean up
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this quickstart exist in the specified resource group, they will also be deleted.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
+```
+++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Managed identities in Azure Container Apps](managed-identity.md)
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
Your container app can be granted two types of identities:
## Why use a managed identity?
-You can use a managed identity in a running container app to authenticate to any [service that supports Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
-
-With managed identities:
+- **Authentication service options**: You can use a managed identity in a running container app to authenticate to any [service that supports Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
- Your app connects to resources with the managed identity. You don't need to manage credentials in your container app. - You can use role-based access control to grant specific permissions to a managed identity. - System-assigned identities are automatically created and managed. They're deleted when your container app is deleted. - You can add and delete user-assigned identities and assign them to multiple resources. They're independent of your container app's life cycle.-- You can use managed identity to [authenticate with a private Azure Container Registry](containers.md#container-registries) without a username and password to pull containers for your Container App.-
+- You can use managed identity to pull images from a private Azure Container Registry without a username and password. For more information, see [Azure Container Apps image pull with managed identity](managed-identity-image-pull.md).
### Common use cases
container-instances Container Instances Encrypt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-encrypt-data.md
The rest of the document covers the steps required to encrypt your ACI deploymen
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
-## Encrypt data with a customer-managed key
+This article reviews two flows for encrypting data with a customer-managed key:
+* Encrypt data with a customer-managed key stored in a standard Azure Key Vault
+* Encrypt data with a customer-managed key stored in a network-protected Azure Key Vault with [Trusted Services](../key-vault/general/network-security.md) enabled.
+
+## Encrypt data with a customer-managed key stored in a standard Azure Key Vault
### Create Service Principal for ACI
az deployment group create --resource-group myResourceGroup --template-file depl
Within a few seconds, you should receive an initial response from Azure. Once the deployment completes, all data related to it persisted by the ACI service will be encrypted with the key you provided.
+## Encrypt data with a customer-managed key in a network-protected Azure Key Vault with Trusted Services enabled
+
+### Create a Key Vault resource
+
+Create an Azure Key Vault using [Azure portal](../key-vault/general/quick-create-portal.md), [Azure CLI](../key-vault/general/quick-create-cli.md), or [Azure PowerShell](../key-vault/general/quick-create-powershell.md). To start, do not apply any network limitations so that the necessary keys can be added to the vault. In subsequent steps, we will add network limitations and enable trusted services.
+
+For the properties of your key vault, use the following guidelines:
+* Name: A unique name is required.
+* Subscription: Choose a subscription.
+* Under Resource Group, either choose an existing resource group, or create new and enter a resource group name.
+* In the Location pull-down menu, choose a location.
+* You can leave the other options at their defaults or adjust them based on additional requirements.
+
+> [!IMPORTANT]
+> When using customer-managed keys to encrypt an ACI deployment template, it is recommended that the following two properties be set on the key vault: Soft Delete and Do Not Purge. These properties are not enabled by default, but can be enabled using either PowerShell or Azure CLI on a new or existing key vault.
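+
+For example, purge protection can be turned on from the Azure CLI. The following is a sketch that reuses the `mykeyvault` and `myResourceGroup` names from later steps; soft delete is already enabled by default on newly created key vaults.
+
+```azurecli-interactive
+# Enable purge protection on the key vault (sketch; reuses names from later steps)
+az keyvault update \
+  --name mykeyvault \
+  --resource-group myResourceGroup \
+  --enable-purge-protection true
+```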
+
+### Generate a new key
+
+Once your key vault is created, navigate to the resource in the Azure portal. On the left navigation menu of the resource blade, under **Settings**, select **Keys**. On the **Keys** view, select **Generate/Import** to generate a new key. Use any unique name for this key, and set any other options based on your requirements. Make sure to capture the key name and version for subsequent steps.
+
+![Screenshot of key creation settings, PNG.](./media/container-instances-encrypt-data/generate-key.png)
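+
+If you prefer the CLI over the portal, a key can also be generated with a command along the following lines. This is a sketch; it uses the `acikey` name that appears in the deployment template later in this article.
+
+```azurecli-interactive
+# Generate an RSA key in the vault (sketch; the key name matches the template below)
+az keyvault key create \
+  --vault-name mykeyvault \
+  --name acikey \
+  --kty RSA \
+  --size 2048
+```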
+
+### Create a user-assigned managed identity for your container group
+Create an identity in your subscription using the [az identity create](/cli/azure/identity#az-identity-create) command. You can use the same resource group used to create the key vault, or use a different one.
+
+```azurecli-interactive
+az identity create \
+ --resource-group myResourceGroup \
+ --name myACIId
+```
+
+To use the identity in the following steps, use the [az identity show](/cli/azure/identity#az-identity-show) command to store the identity's service principal ID and resource ID in variables.
+
+```azurecli-interactive
+# Get service principal ID of the user-assigned identity
+spID=$(az identity show \
+ --resource-group myResourceGroup \
+ --name myACIId \
+ --query principalId --output tsv)
+```
+
+### Set access policy
+
+Create a new access policy for allowing the user-assigned identity to access and unwrap your key for encryption purposes.
+
+```azurecli-interactive
+az keyvault set-policy \
+ --name mykeyvault \
+ --resource-group myResourceGroup \
+ --object-id $spID \
+ --key-permissions get unwrapKey
+ ```
+
+### Modify Azure Key Vault's network permissions
+The following commands configure the network firewall on your Azure Key Vault to deny access by default and allow trusted Azure services, such as ACI, to access it.
+
+```azurecli-interactive
+az keyvault update \
+ --name mykeyvault \
+ --resource-group myResourceGroup \
+ --default-action Deny
+ ```
+
+```azurecli-interactive
+az keyvault update \
+ --name mykeyvault \
+ --resource-group myResourceGroup \
+ --bypass AzureServices
+ ```
+
+### Modify your JSON deployment template
+
+> [!IMPORTANT]
+> Encrypting deployment data with a customer-managed key is available in the 2022-09-01 API version or newer. The 2022-09-01 API version is only available via ARM or REST. If you have any issues with this, please reach out to Azure Support.
+
+Once the key vault key and access policy are set up, add the following properties to your ACI deployment template. Learn more about deploying ACI resources with a template in the [Tutorial: Deploy a multi-container group using a Resource Manager template](./container-instances-multi-container-group.md).
+* Under `resources`, set `apiVersion` to `2022-09-01`.
+* Under the container group properties section of the deployment template, add an `encryptionProperties`, which contains the following values:
+  * `vaultBaseUrl`: the DNS name of your key vault. You can find it on the **Overview** blade of the key vault resource in the Azure portal.
+  * `keyName`: the name of the key generated earlier.
+  * `keyVersion`: the current version of the key. You can find it by selecting the key itself (under **Keys** in the **Settings** section of your key vault resource).
+  * `identity`: the resource URI of the managed identity instance created earlier.
+* Under the container group properties, add a `sku` property with value `Standard`. The `sku` property is required in API version 2022-09-01.
+* Under resources, add the `identity` object required to use Managed Identity with ACI, which contains the following values:
+  * `type`: the type of identity being used (either user-assigned or system-assigned). In this case, it's set to `UserAssigned`.
+  * `userAssignedIdentities`: the resource URI of the same user-assigned identity used above in the `encryptionProperties` object.
+
+The following template snippet shows these additional properties to encrypt deployment data:
+
+```json
+[...]
+"resources": [
+ {
+ "name": "[parameters('containerGroupName')]",
+ "type": "Microsoft.ContainerInstance/containerGroups",
+    "apiVersion": "2022-09-01",
+ "location": "[resourceGroup().location]",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId": {}
+ }
+ },
+ "properties": {
+ "encryptionProperties": {
+ "vaultBaseUrl": "https://example.vault.azure.net",
+ "keyName": "acikey",
+ "keyVersion": "xxxxxxxxxxxxxxxx",
+ "identity": "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId"
+ },
+ "sku": "Standard",
+ "containers": {
+ [...]
+ }
+ }
+ }
+]
+```
+
+Following is a complete template, adapted from the template in [Tutorial: Deploy a multi-container group using a Resource Manager template](./container-instances-multi-container-group.md).
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "containerGroupName": {
+ "type": "string",
+ "defaultValue": "myContainerGroup",
+ "metadata": {
+ "description": "Container Group name."
+ }
+ }
+ },
+ "variables": {
+ "container1name": "aci-tutorial-app",
+ "container1image": "mcr.microsoft.com/azuredocs/aci-helloworld:latest",
+ "container2name": "aci-tutorial-sidecar",
+ "container2image": "mcr.microsoft.com/azuredocs/aci-tutorial-sidecar"
+ },
+ "resources": [
+ {
+ "name": "[parameters('containerGroupName')]",
+ "type": "Microsoft.ContainerInstance/containerGroups",
+ "apiVersion": "2022-09-01",
+ "location": "[resourceGroup().location]",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId": {}
+ }
+ },
+ "properties": {
+ "encryptionProperties": {
+ "vaultBaseUrl": "https://example.vault.azure.net",
+ "keyName": "acikey",
+ "keyVersion": "xxxxxxxxxxxxxxxx",
+ "identity": "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId"
+ },
+ "sku": "Standard",
+ "containers": [
+ {
+ "name": "[variables('container1name')]",
+ "properties": {
+ "image": "[variables('container1image')]",
+ "resources": {
+ "requests": {
+ "cpu": 1,
+ "memoryInGb": 1.5
+ }
+ },
+ "ports": [
+ {
+ "port": 80
+ },
+ {
+ "port": 8080
+ }
+ ]
+ }
+ },
+ {
+ "name": "[variables('container2name')]",
+ "properties": {
+ "image": "[variables('container2image')]",
+ "resources": {
+ "requests": {
+ "cpu": 1,
+ "memoryInGb": 1.5
+ }
+ }
+ }
+ }
+ ],
+ "osType": "Linux",
+ "ipAddress": {
+ "type": "Public",
+ "ports": [
+ {
+ "protocol": "tcp",
+ "port": "80"
+ },
+ {
+ "protocol": "tcp",
+ "port": "8080"
+ }
+ ]
+ }
+ }
+ }
+ ],
+ "outputs": {
+ "containerIPv4Address": {
+ "type": "string",
+ "value": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups/', parameters('containerGroupName'))).ipAddress.ip]"
+ }
+ }
+}
+```
+
+### Deploy your resources
+
+If you created and edited the template file on your desktop, you can upload it to your Cloud Shell directory by dragging the file into it.
+
+Create a resource group with the [az group create][az-group-create] command.
+
+```azurecli-interactive
+az group create --name myResourceGroup --location eastus
+```
+
+Deploy the template with the [az deployment group create][az-deployment-group-create] command.
+
+```azurecli-interactive
+az deployment group create --resource-group myResourceGroup --template-file deployment-template.json
+```
+
+Within a few seconds, you should receive an initial response from Azure. Once the deployment completes, all data related to it persisted by the ACI service will be encrypted with the key you provided.
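+
+As an optional check, you can confirm that the container group finished provisioning. The query below is a sketch and assumes the CLI's flattened output for the container group resource.
+
+```azurecli-interactive
+# Confirm that the container group deployed successfully (sketch)
+az container show \
+  --resource-group myResourceGroup \
+  --name myContainerGroup \
+  --query provisioningState \
+  --output tsv
+```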
<!-- LINKS - Internal --> [az-group-create]: /cli/azure/group#az_group_create [az-deployment-group-create]: /cli/azure/deployment/group/#az_deployment_group_create
container-registry Container Registry Tutorial Build Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-build-task.md
az acr task create \
--registry $ACR_NAME \ --name taskhelloworld \ --image helloworld:{{.Run.ID}} \
- --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#master \
+ --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#main \
--file Dockerfile \ --git-access-token $GIT_PAT ```
Output from a successful [az acr task create][az-acr-task-create] command is sim
"step": { "arguments": [], "baseImageDependencies": null,
- "contextPath": "https://github.com/gituser/acr-build-helloworld-node#master",
+ "contextPath": "https://github.com/gituser/acr-build-helloworld-node#main",
"dockerFilePath": "Dockerfile", "imageNames": [ "helloworld:{{.Run.ID}}"
Output from a successful [az acr task create][az-acr-task-create] command is sim
"name": "defaultSourceTriggerName", "sourceRepository": { "branch": "main",
- "repositoryUrl": "https://github.com/gituser/acr-build-helloworld-node#master",
+ "repositoryUrl": "https://github.com/gituser/acr-build-helloworld-node#main",
"sourceControlAuthProperties": null, "sourceControlType": "GitHub" },
Next, execute the following commands to create, commit, and push a new file to y
echo "Hello World!" > hello.txt git add hello.txt git commit -m "Testing ACR Tasks"
-git push origin master
+git push origin main
``` You may be asked to provide your GitHub credentials when you execute the `git push` command. Provide your GitHub username, and enter the personal access token (PAT) that you created earlier for the password.
container-registry Github Action Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/github-action-scan.md
Get started with the [GitHub Actions](https://docs.github.com/en/actions/learn-g
With GitHub Actions, you can speed up your CI/CD process by building, scanning, and pushing images to a public or private [Container Registry](https://azure.microsoft.com/services/container-registry/) from your workflows.
-In this article, we'll make use of the [Container image scan](https://github.com/marketplace/actions/container-image-scan) from the [GitHub Marketplace](https://github.com/marketplace).
+In this article, we'll make use of the [Container image scan](https://github.com/marketplace/actions/test-container-image-scan) from the [GitHub Marketplace](https://github.com/marketplace).
## Prerequisites
Build and tag the image using the following snippet in the workflow. The followi
## Scan the image
-Before pushing the built image into the container registry, make sure you scan and check the image for any vulnerabilities by using the [Container image scan action](https://github.com/marketplace/actions/container-image-scan).
+Before pushing the built image into the container registry, make sure you scan and check the image for any vulnerabilities by using the [Container image scan action](https://github.com/marketplace/actions/test-container-image-scan).
Add the following code snippet into the workflow, which will scan the image for any ***critical vulnerabilities*** such that the image that will be pushed is secure and complies with the standards.
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
How to enable analytical store on a container:
* From the Azure Management SDK, Azure Cosmos DB SDKs, PowerShell, or Azure CLI, the ATTL option can be enabled by setting it to either -1 or 'n' seconds.
-To learn more, see [how to configure analytical TTL on a container](configure-synapse-link.md#create-analytical-ttl).
+To learn more, see [how to configure analytical TTL on a container](configure-synapse-link.md).
## Cost-effective analytics on historical data
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
description: Learn how to enable Synapse link for Azure Cosmos DB accounts, crea
Previously updated : 11/02/2021 Last updated : 09/26/2022
Azure Synapse Link is available for Azure Cosmos DB SQL API or for Azure Cosmos DB API for Mongo DB accounts. Use the following steps to run analytical queries with the Azure Synapse Link for Azure Cosmos DB: * [Enable Azure Synapse Link for your Azure Cosmos DB accounts](#enable-synapse-link)
-* [Create an analytical store enabled container](#create-analytical-ttl)
-* [Enable analytical store in an existing container](#update-analytical-ttl)
-* [Optional - Update analytical store ttl for a container](#update-analytical-ttl)
-* [Optional - Disable analytical store in a container](#disable-analytical-store)
+* [Enable Azure Synapse Link for your containers](#update-analytical-ttl)
* [Connect your Azure Cosmos database to an Azure Synapse workspace](#connect-to-cosmos-database)
-* [Query the analytical store using Azure Synapse Spark Pool](#query-analytical-store-spark)
-* [Query the analytical store using Azure Synapse serverless SQL pool](#query-analytical-store-sql-on-demand)
+* [Query analytical store using Azure Synapse Analytics](#query)
+* [Improve performance with Best Practices](#best)
* [Use Azure Synapse serverless SQL pool to analyze and visualize data in Power BI](#analyze-with-powerbi) You can also checkout the training module on how to [configure Azure Synapse Link for Azure Cosmos DB.](/training/modules/configure-azure-synapse-link-with-azure-cosmos-db/) ## <a id="enable-synapse-link"></a>Enable Azure Synapse Link for Azure Cosmos DB accounts
+The first step in using Synapse Link is to enable it for your Azure Cosmos DB database account. This is a one-time operation.
+ > [!NOTE] > If you want to use customer-managed keys with Azure Synapse Link, you must configure your account's managed identity in your Azure Key Vault access policy before enabling Synapse Link on your account. To learn more, see how to [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article.
You can also checkout the training module on how to [configure Azure Synapse Lin
> [!NOTE] > Turning on Synapse Link does not turn on the analytical store automatically. Once you enable Synapse Link on the Cosmos DB account, enable analytical store on containers to start using Synapse Link.
+> [!NOTE]
+> You can also enable Synapse Link for your account from the **Power BI** and **Synapse Link** panes, in the **Integrations** section of the left navigation menu.
+ ### Command-Line Tools Enable Synapse Link in your Cosmos DB SQL API or MongoDB API account using Azure CLI or PowerShell.
Use `EnableAnalyticalStorage true` for both **create** or **update** operations.
* [Create a new Azure Cosmos DB account with Synapse Link enabled](/powershell/module/az.cosmosdb/new-azcosmosdbaccount#description) * [Update an existing Azure Cosmos DB account to enable Synapse Link](/powershell/module/az.cosmosdb/update-azcosmosdbaccount)
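+
+With the Azure CLI, the same account-level setting can be applied as follows. This is a minimal sketch; the account and resource group names are placeholders.
+
+```azurecli
+# Enable Synapse Link (analytical storage) on an existing account (sketch)
+az cosmosdb update \
+    --name <account-name> \
+    --resource-group <resource-group> \
+    --enable-analytical-storage true
+```
+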
+#### Azure Resource Manager template
+
+This [Azure Resource Manager template](./manage-with-templates.md#azure-cosmos-account-with-analytical-store) creates a Synapse Link enabled Azure Cosmos DB account for SQL API. This template creates a Core (SQL) API account in one region with a container configured with analytical TTL enabled, and an option to use manual or autoscale throughput. To deploy this template, click on **Deploy to Azure** on the readme page.
+
+## <a id="update-analytical-ttl"></a> Enable Azure Synapse Link for your containers
+
+The second step is to enable Synapse Link for your containers or collections. This is accomplished by setting the `analytical TTL` property to `-1` for infinite retention, or to a positive integer that represents the number of seconds of data that you want to keep in analytical store. This setting can be changed later. For more information, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
+
+Please note the following details when enabling Azure Synapse Link on your existing SQL API containers:
-## <a id="create-analytical-ttl"></a> Create an analytical store enabled container
+* The same performance isolation of the analytical store auto-sync process applies to the initial sync and there is no performance impact on your OLTP workload.
+* The total time for a container's initial sync with analytical store varies depending on the data volume and on the documents' complexity. This process can take anywhere from a few seconds to multiple days. Please use the Azure portal to monitor the migration progress.
+* The throughput of your container, or database account, also influences the total initial sync time. Although RU/s are not used in this migration, the total RU/s available influences the performance of the process. You can temporarily increase your environment's available RUs to speed up the process.
+* You won't be able to query analytical store of an existing container while Synapse Link is being enabled on that container. Your OLTP workload isn't impacted and you can keep on reading data normally. Data ingested after the start of the initial sync will be merged into analytical store by the regular analytical store auto-sync process.
+
+> [!NOTE]
+> Currently you can't enable Synapse Link for MongoDB API containers.
-You can turn on analytical store when creating an Azure Cosmos DB container by using one of the following options.
### Azure portal
You can turn on analytical store when creating an Azure Cosmos DB container by u
1. After the container is created, verify that analytical store has been enabled by clicking **Settings**, right below Documents in Data Explorer, and check if the **Analytical Store Time to Live** option is turned on.
-### Azure Cosmos DB SDKs
+> [!NOTE]
+> You can also enable Synapse Link from the **Power BI** and **Synapse Link** panes, in the **Integrations** section of the left navigation menu.
+
+### Command-Line Tools
+
+#### Azure CLI
+
+The following options enable Synapse Link in a container by using the Azure CLI to set the `--analytical-storage-ttl` property (see the sketch after the links below).
+
+* [Create an Azure Cosmos DB MongoDB collection](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create-examples)
+* [Create or update an Azure Cosmos DB SQL API container](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create)
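+
+As a minimal sketch, the following command enables analytical store with infinite retention on an existing SQL API container; the names are placeholders.
+
+```azurecli
+# Set analytical TTL to -1 (infinite retention) on an existing SQL API container (sketch)
+az cosmosdb sql container update \
+    --account-name <account-name> \
+    --database-name <database-name> \
+    --name <container-name> \
+    --resource-group <resource-group> \
+    --analytical-storage-ttl -1
+```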
+
+#### PowerShell
+
+The following options enable Synapse Link in a container by using Azure PowerShell to set the `-AnalyticalStorageTtl` property.
+
+* [Create an Azure Cosmos DB MongoDB collection](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbcollection#description)
+* [Create or update an Azure Cosmos DB SQL API container](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer)
+
-Set the `analytical TTL` property to the required value to create an analytical store enabled container. For the list of allowed values, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
+### Azure Cosmos DB SDKs - SQL API only
#### .NET SDK
-The following code creates a container with analytical store by using the .NET SDK. Set the `AnalyticalStoreTimeToLiveInSeconds` property to the required value in seconds or use `-1` for infinite retention. This setting can be changed later.
+The following .NET code creates a Synapse Link enabled container by setting the `AnalyticalStoreTimeToLiveInSeconds` property. To update an existing container, use the `Container.ReplaceContainerAsync` method.
```csharp // Create a container with a partition key, and analytical TTL configured to -1 (infinite retention)
await cosmosClient.GetDatabase("myDatabase").CreateContainerAsync(properties);
#### Java V4 SDK
-The following code creates a container with analytical store by using the Java V4 SDK. Set the `AnalyticalStoreTimeToLiveInSeconds` property to the required value in seconds or use `-1` for infinite retention. This setting can be changed later.
-
+The following Java code creates a Synapse Link enabled container by calling the `setAnalyticalStoreTimeToLiveInSeconds` method. To update an existing container, use the `container.replace` method.
```java // Create a container with a partition key and analytical TTL configured to -1 (infinite retention)
container = database.createContainerIfNotExists(containerProperties, 400).block(
#### Python V4 SDK
-The following code creates a container with analytical store by using the Python V4 SDK. Set the `analytical_storage_ttl` property to the required value in seconds or use `-1` for infinite retention. This setting can be changed later.
+The following Python code creates a Synapse Link enabled container by setting the `analytical_storage_ttl` property. To update an existing container, use the `replace_container` method.
```python
-# Azure Cosmos DB Python SDK, for SQL API only.
-# Creating an analytical store enabled container.
-
-import azure.cosmos as cosmos
-import azure.cosmos.cosmos_client as cosmos_client
-import azure.cosmos.exceptions as exceptions
-from azure.cosmos.partition_key import PartitionKey
-
-HOST = 'your-cosmos-db-account-URI'
-KEY = 'your-cosmos-db-account-key'
-DATABASE = 'your-cosmos-db-database-name'
-CONTAINER = 'your-cosmos-db-container-name'
- # Client client = cosmos_client.CosmosClient(HOST, KEY )
try:
except exceptions.CosmosResourceExistsError: print('A container with already exists') ```
-### Command-Line Tools
-
-Set the `analytical TTL` property to the required value to create an analytical store enabled container. For the list of allowed values, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
-
-#### Azure CLI
-
-The following options create a container with analytical store by using Azure CLI. Set the `--analytical-storage-ttl` property to the required value in seconds or use `-1` for infinite retention. This setting can be changed later.
-
-* [Create an Azure Cosmos DB MongoDB collection](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-create-examples)
-* [Create an Azure Cosmos DB SQL API container](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create)
-
-#### PowerShell
-
-The following options create a container with analytical store by using PowerShell. Set the `-AnalyticalStorageTtl` property to the required value in seconds or use `-1` for infinite retention. This setting can be changed later.
-
-* [Create an Azure Cosmos DB MongoDB collection](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbcollection#description)
-* [Create an Azure Cosmos DB SQL API container](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer)
--
-## <a id="update-analytical-ttl"></a> Enable analytical store in an existing container
-
-> [!NOTE]
-> You can turn on analytical store on existing Azure Cosmos DB SQL API containers. This capability is general available and can be used for production workloads.
-
-Please note the following details when enabling Azure Synapse Link on your existing containers:
-
-* The same performance isolation of the analytical store auto-sync process applies to the initial sync and there is no performance impact on your OLTP workload.
-
-* A container's initial sync with analytical store total time will vary depending on the data volume and on the documents complexity. This process can take anywhere from a few seconds to multiple days. Please use the Azure portal to monitor the migration progress.
-
-* The throughput of your container, or database account, also influences the total initial sync time. Although RU/s are not used in this migration, the total RU/s available influences the performance of the process. You can temporarily increase your environment's available RUs to speed up the process.
-
-* You won't be able to query analytical store of an existing container while Synapse Link is being enabled on that container. Your OLTP workload isn't impacted and you can keep on reading data normally. Data ingested after the start of the initial sync will be merged into analytical store by the regular analytical store auto-sync process.
-
-* Currently existing MongoDB API collections are not supported. The alternative is to migrate the data into a new collection, created with analytical store turned on.
-
-### Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Navigate to your Azure Cosmos DB account and open the **Azure Synapse Link"** tab in the **Integrations** left navigation section. In this tab you can enable Synapse Link in your database account and you can enable Synapse Link on your existing containers.
-4. After you click the blue **Enable Synapse Link on your container(s)** button, you will start to see the progress of your containers initial sync progress.
-5. Optionally, you can go to the **Power BI** tab, also in the **Integrations** section, to create Power BI dashboards on your Synapse Link enabled containers.
--
-### Command-Line Tools
-
-Set the `analytical TTL` property to `-1` for infinite retention or use a positive integer to specify the number of seconds that the data will be retain in analytical store. For more information, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
--
-### Azure CLI
-
-* Use [az cosmosdb sql container update](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-update) to update `--analytical-storage-ttl`.
-* Check the migration status in the Azure portal.
-
-### PowerShell
-
-* Use [Update Analytical ttl](/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer) to update `-AnalyticalStorageTtl`.
-* Check the migration status in the Azure portal.
-
-## <a id="update-analytical-ttl"></a> Optional - Update the analytical store time to live
-
-After the analytical store is enabled with a particular TTL value, you may want to update it to a different valid value. You can update the value by using the Azure portal, Azure CLI, PowerShell, or Cosmos DB SDKs. For information on the various Analytical TTL config options, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article.
-
-### Azure portal
-
-If you created an analytical store enabled container through the Azure portal, it contains a default `analytical TTL` of `-1`. Use the following steps to update this value:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/).
-1. Navigate to your Azure Cosmos DB account and open the **Data Explorer** tab.
-1. Select an existing container that has analytical store enabled. Expand it and modify the following values:
- 1. Open the **Scale & Settings** window.
- 1. Under **Setting** find, **Analytical Storage Time to Live**.
- 1. Select **On (no default)** or select **On** and set a TTL value.
- 1. Click **Save** to save the changes.
--
-### .NET SDK
-
-The following code shows how to update the TTL for analytical store by using the .NET SDK:
-
-```csharp
-// Get the container, update AnalyticalStorageTimeToLiveInSeconds
-ContainerResponse containerResponse = await client.GetContainer("database", "container").ReadContainerAsync();
-// Update analytical store TTL
-containerResponse.Resource. AnalyticalStorageTimeToLiveInSeconds = 60 * 60 * 24 * 180 // Expire analytical store data in 6 months;
-await client.GetContainer("database", "container").ReplaceContainerAsync(containerResponse.Resource);
-```
-
-### Java V4 SDK
-
-The following code shows how to update the TTL for analytical store by using the Java V4 SDK:
-
-```java
-CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey");
-
-// Update analytical store TTL to expire analytical store data in 6 months;
-containerProperties.setAnalyticalStoreTimeToLiveInSeconds (60 * 60 * 24 * 180 );
-
-// Update container settings
-container.replace(containerProperties).block();
-```
-
-### Python V4 SDK
-
-Currently not supported.
--
-### Azure CLI
-The following links show how to update containers analytical TTL by using Azure CLI:
+## Optional - Disable analytical store in a SQL API container
-* [Azure Cosmos DB API for Mongo DB](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-update)
-* [Azure Cosmos DB SQL API](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-update)
-
-### PowerShell
-
-The following links show how to update containers analytical TTL by using PowerShell:
-
-* [Azure Cosmos DB API for Mongo DB](/powershell/module/az.cosmosdb/update-azcosmosdbmongodbcollection)
-* [Azure Cosmos DB SQL API](/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer)
-
-## <a id="disable-analytical-store"></a> Optional - Disable analytical store in a SQL API container
-
-Analytical store can be disabled in SQL API containers using Azure CLI or PowerShell.
+Analytical store can be disabled in SQL API containers using Azure CLI or PowerShell, by setting `analytical TTL` to `0`.
> [!NOTE] > Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled.
Analytical store can be disabled in SQL API containers using Azure CLI or PowerS
> Please note that disabling analytical store is not available for MongoDB API collections.
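+
+For example, with the Azure CLI this looks like the following sketch; the names are placeholders.
+
+```azurecli
+# Disable analytical store by setting analytical TTL to 0 (sketch; this action can't be undone)
+az cosmosdb sql container update \
+    --account-name <account-name> \
+    --database-name <database-name> \
+    --name <container-name> \
+    --resource-group <resource-group> \
+    --analytical-storage-ttl 0
+```
+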
-### Azure CLI
-
-Set `--analytical-storage-ttl` parameter to 0 using the `az cosmosdb sql container update` Azure CLI command.
-
-### PowerShell
-
-Set `-AnalyticalStorageTtl` paramenter to 0 using the `Update-AzCosmosDBSqlContainer` PowerShell command.
-- ## <a id="connect-to-cosmos-database"></a> Connect to a Synapse workspace Use the instructions in [Connect to Azure Synapse Link](../synapse-analytics/synapse-link/how-to-connect-synapse-link-cosmos-db.md) on how to access an Azure Cosmos DB database from Azure Synapse Analytics Studio with Azure Synapse Link.
-## <a id="query-analytical-store-spark"></a> Query analytical store using Apache Spark for Azure Synapse Analytics
+## <a id="query"></a> Query analytical store using Azure Synapse Analytics
+
+### Query analytical store using Apache Spark for Azure Synapse Analytics
Use the instructions in the [Query Azure Cosmos DB analytical store using Spark 3](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark-3.md) article on how to query with Synapse Spark 3. That article gives some examples on how you can interact with the analytical store from Synapse gestures. Those gestures are visible when you right-click on a container. With gestures, you can quickly generate code and tweak it to your needs. They are also perfect for discovering data with a single click. For Spark 2 integration use the instruction in the [Query Azure Cosmos DB analytical store using Spark 2](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md) article.
-## <a id="query-analytical-store-sql-on-demand"></a> Query the analytical store using serverless SQL pool in Azure Synapse Analytics
+### Query the analytical store using serverless SQL pool in Azure Synapse Analytics
Serverless SQL pool allows you to query and analyze data in your Azure Cosmos DB containers that are enabled with Azure Synapse Link. You can analyze data in near real-time without impacting the performance of your transactional workloads. It offers a familiar T-SQL syntax to query data from the analytical store and integrated connectivity to a wide range of BI and ad-hoc querying tools via the T-SQL interface. To learn more, see the [Query analytical store using serverless SQL pool](../synapse-analytics/sql/query-cosmos-db-analytical-store.md) article.
You can use integrated BI experience in Azure Cosmos DB portal, to build BI dash
If you want to use advance T-SQL views with joins across your containers or build BI dashboards in import](/power-bi/connect-dat) article.
-## Configure custom partitioning
+## <a id="best"></a> Improve Performance with Best Practices
+
+### Custom Partitioning
Custom partitioning enables you to partition analytical store data on fields that are commonly used as filters in analytical queries resulting in improved query performance. To learn more, see the [introduction to custom partitioning](custom-partitioning-analytical-store.md) and [how to configure custom partitioning](configure-custom-partitioning.md) articles.
-## Azure Resource Manager template
+### Synapse SQL Serverless best practices for Azure Synapse Link for Cosmos DB
+
+Use [these](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/best-practices-for-integrating-serverless-sql-pool-with-cosmos/ba-p/3257975) mandatory best practices for your SQL serverless queries.
-The [Azure Resource Manager template](./manage-with-templates.md#azure-cosmos-account-with-analytical-store) creates a Synapse Link enabled Azure Cosmos DB account for SQL API. This template creates a Core (SQL) API account in one region with a container configured with analytical TTL enabled, and an option to use manual or autoscale throughput. To deploy this template, click on **Deploy to Azure** on the readme page.
## <a id="cosmosdb-synapse-link-samples"></a> Getting started with Azure Synapse Link - Samples
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
The API for MongoDB now offers a built-in role-based access control (RBAC) that
[Learn more](./how-to-setup-rbac.md)
-### Unique partial indexes in Azure Cosmos DB API for MongoDB
-The unique partial indexes feature allows you more flexibility to specify exactly which fields in which documents youΓÇÖd like to index, all while enforcing uniqueness of that fieldΓÇÖs value. Resulting in the unique constraint being applied only to the documents that meet the specified filter expression.
-
-[Learn more](./feature-support-42.md)
--
-### Azure Cosmos DB API for MongoDB unique index reindexing (Preview)
-The unique index feature for Azure Cosmos DB allows you to create unique indexes when your collection was empty and didn't contain documents. This feature provides you with more flexibility by giving you the ability to create unique indexes whenever you want toΓÇömeaning thereΓÇÖs no need to plan unique indexes ahead of time before inserting any data into the collection.
-
-[Learn more](./mongodb-indexing.md) and enable the feature today by [submitting a support ticket request](https://azure.microsoft.com/support/create-ticket/)
-- ### Azure Cosmos DB API for MongoDB supports version 4.2 The Azure Cosmos DB API for MongoDB version 4.2 includes new aggregation functionality and improved security features such as client-side field encryption. These features help you accelerate development by applying the new functionality instead of developing it yourself.
cosmos-db Synapse Link Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link-power-bi.md
Make sure to create the following resources before you start:
* Enable Azure Synapse Link for your [Azure Cosmos account](configure-synapse-link.md#enable-synapse-link)
-* Create a database within the Azure Cosmos account and two containers that have [analytical store enabled.](configure-synapse-link.md#create-analytical-ttl)
+* Create a database within the Azure Cosmos account and two containers that have [analytical store enabled.](configure-synapse-link.md)
* Load products data into the Azure Cosmos containers as described in this [batch data ingestion](https://github.com/Azure-Samples/Synapse/blob/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/spark-notebooks/pyspark/1CosmoDBSynapseSparkBatchIngestion.ipynb) notebook.
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement-across-tenants.md
+
+ Title: Programmatically create MCA subscriptions across Azure Active Directory tenants
+description: Learn how to programmatically create an Azure MCA subscription across Azure Active Directory tenants.
++++ Last updated : 08/22/2022++++
+# Programmatically create MCA subscriptions across Azure Active Directory tenants
+
+This article helps you programmatically create a Microsoft Customer Agreement (MCA) subscription across Azure Active Directory tenants. In some situations, you might need to create MCA subscriptions across Azure Active Directory (Azure AD) tenants but have them tied to a single billing account. Examples of such situations include SaaS providers wanting to segregate hosted customer services from internal IT services or internal environments that have strict regulatory compliance requirements, like Payment Card Industry (PCI).
+
+Creating an MCA subscription across tenants is effectively a two-phase process that requires actions in both the source and destination Azure AD tenants. This article uses the following terminology:
+
+- Source Azure AD (source.onmicrosoft.com). It represents the source tenant where the MCA billing account exists.
+- Destination Cloud Azure AD (destination.onmicrosoft.com). It represents the destination tenant where the new MCA subscriptions are created.
+
+## Prerequisites
+
+You must already have the following tenants created:
+
+- A source Azure AD tenant with an active [Microsoft Customer Agreement](create-subscription.md) billing account. If you don't have an active MCA, you can create one. For more information, see [Azure - Sign up](https://signup.azure.com/).
+- A destination Azure AD tenant separate from the tenant where your MCA belongs. To create a new Azure AD tenant, see [Azure AD tenant setup](../../active-directory/develop/quickstart-create-new-tenant.md).
+
+## Application set-up
+
+Use the information in the following sections to set up and configure the needed applications in the source and destination tenants.
+
+### Register an application in the source tenant
+
+To programmatically create an MCA subscription, an Azure AD application must be registered and granted the appropriate Azure RBAC permission. For this step, ensure you're signed into the source tenant (source.onmicrosoft.com) with an account that has permissions to register Azure AD applications.
+
+Follow the steps in [Quickstart: Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
+
+For the purposes of this process, you only need to follow the [Register an application](../../active-directory/develop/quickstart-register-app.md#register-an-application) and [Add credentials](../../active-directory/develop/quickstart-register-app.md#add-credentials) sections.
+
+Save the following information to test and configure your environment:
+
+- Directory (tenant) ID
+- Application (client) ID
+- Object ID
+- App secret value that was generated. The value is only visible at the time of creation.
+
+### Create a billing role assignment for the application in the source tenant
+
+Review the information at [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md) to determine the appropriate scope and [billing role](understand-mca-roles.md#subscription-billing-roles-and-tasks) for the application.
+
+After you determine the scope and role, use the information at [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal) to create the role assignment for the application. Search for the application by using the name that you used when you registered the application in the preceding section.
+
+### Register an application in the destination tenant
+
+To accept the MCA subscription from the destination tenant (destination.onmicrosoft.com), an Azure AD application must be registered and added to the Billing administrator Azure AD role. For this step, ensure you're signed in to the destination tenant (destination.onmicrosoft.com) with an account that has permissions to register Azure AD applications. It must also have billing administrator role permission.
+
+Follow the same steps that you used to register the application in the source tenant, this time in the destination tenant. Save the following information to test and configure your environment:
+
+- Directory (tenant) ID
+- Application (client) ID
+- Object ID
+- App secret value that was generated. The value is only visible at the time of creation.
+
+### Add the destination application to the Billing administrator Azure AD role
+
+Use the information at [Assign administrator and non-administrator roles to users with Azure AD](../../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md) to add the destination application created in the preceding section to the Billing administrator Azure AD role in the destination tenant.
+
+## Programmatically create a subscription
+
+With the applications and permissions already set up, use the following information to programmatically create subscriptions.
+
+### Get the ID of the destination application service principal
+
+When you create an MCA subscription in the source tenant, you must specify the service principal (SPN) of the application in the destination tenant as the owner. Use one of the following methods to get its ID. In both methods, replace the all-zeros placeholder GUID with the application (client) ID of the destination tenant application that you created previously.
+
+#### Azure CLI
+
+Sign in to Azure CLI and use the [az ad sp show](/cli/azure/ad/sp#az-ad-sp-show) command:
+
+```sh
+az ad sp show --id 00000000-0000-0000-0000-000000000000 --query 'id'
+```
+
+#### Azure PowerShell
+
+Sign in to Azure PowerShell and use the [Get-AzADServicePrincipal](/powershell/module/az.resources/get-azadserviceprincipal) cmdlet:
+
+```powershell
+Get-AzADServicePrincipal -ApplicationId 00000000-0000-0000-0000-000000000000 | Select-Object -Property Id
+```
+
+Save the `Id` value returned by the command.
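+
+For example, you might capture the value in a shell variable for later use (a sketch that assumes bash and the Azure CLI):
+
+```sh
+# Object ID of the destination application's service principal (variable name is an example)
+destination_sp_id=$(az ad sp show --id 00000000-0000-0000-0000-000000000000 --query 'id' --output tsv)
+echo "$destination_sp_id"
+```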
+
+### Create the subscription
+
+Use the following information to create a subscription in the source tenant.
+
+#### Get a source application access token
+
+Replace the `{{placeholders}}` with the actual tenant ID, application (client) ID, and the app secret values that you saved when you created the source tenant application previously.
+
+Invoke the request and save the `access_token` value from the response for use in the next step.
+
+```http
+POST https://login.microsoftonline.com/{{tenant_id}}/oauth2/token
+Content-Type: application/x-www-form-urlencoded
+
+grant_type=client_credentials&client_id={{client_id}}&client_secret={{app_secret}}&resource=https%3A%2F%2Fmanagement.azure.com%2F
+```
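+
+If you prefer the command line, the same token request can be sent from a shell. The following sketch assumes bash, curl, and `jq` (used only to extract `access_token`):
+
+```sh
+# Request a token for the source tenant application; the shell variables are placeholders
+curl -s -X POST "https://login.microsoftonline.com/${tenant_id}/oauth2/token" \
+  --data-urlencode "grant_type=client_credentials" \
+  --data-urlencode "client_id=${client_id}" \
+  --data-urlencode "client_secret=${app_secret}" \
+  --data-urlencode "resource=https://management.azure.com/" | jq -r '.access_token'
+```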
+
+#### Get the billing account, profile, and invoice section IDs
+
+Use the information at [Find billing accounts that you have access to](programmatically-create-subscription-microsoft-customer-agreement.md?tabs=rest#find-billing-accounts-that-you-have-access-to) and [Find billing profiles & invoice sections to create subscriptions](programmatically-create-subscription-microsoft-customer-agreement.md?tabs=rest#find-billing-profiles--invoice-sections-to-create-subscriptions) sections to get the billing account, profile, and invoice section IDs.
+
+> [!NOTE]
+> We recommend using the REST method with the access token obtained previously to verify that the application billing role assignment was created successfully in the [Application Setup](#application-set-up) section.
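+
+For example, a request along the following lines lists the billing accounts that the application can access. The `api-version` shown is an assumption based on the linked article, so confirm it there:
+
+```http
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts?api-version=2020-05-01
+Authorization: Bearer {{access_token}}
+```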
+
+#### Create a subscription alias
+
+With the billing account, profile, and invoice section IDs, you have all the information needed to create the subscription:
+
+- `{{guid}}`: Any valid GUID that you generate. It's used as the alias name in the request URL.
+- `{{access_token}}`: Access token of the source tenant application obtained previously.
+- `{{billing_account}}`: ID of the billing account obtained previously.
+- `{{billing_profile}}`: ID of the billing profile obtained previously.
+- `{{invoice_section}}`: ID of the invoice section obtained previously.
+- `{{destination_tenant_id}}`: ID of the destination tenant as noted when you previously created the destination tenant application.
+- `{{destination_service_principal_id}}`: ID of the destination tenant service principal that you got from the [Get the ID of the destination application service principal](#get-the-id-of-the-destination-application-service-principal) section previously.
+
+Send the request and note the value of the `Location` header in the response.
+
+```http
+PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid}}?api-version=2021-10-01
+Authorization: Bearer {{access_token}}
+Content-Type: application/json
+
+{
+ "properties": {
+ "displayName": "{{subscription_name}}",
+ "workload": "Production",
+ "billingScope": "/billingAccounts/{{billing_account}}/billingProfiles/{{billing_profile}}/invoiceSections/{{invoice_section}}",
+ "subscriptionId": null,
+ "additionalProperties": {
+ "managementGroupId": null,
+ "subscriptionTenantId": "{{destination_tenant_id}}",
+ "subscriptionOwnerId": "{{destination_service_principal_id}}"
+ }
+ }
+}
+```
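+
+The `Location` header points to a URL that you can poll until the subscription is created. Alternatively, you can query the alias itself with the same access token; the response should include the new subscription ID after provisioning completes. The following request is a sketch that reuses the same API version:
+
+```http
+GET https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid}}?api-version=2021-10-01
+Authorization: Bearer {{access_token}}
+```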
+
+### Accept subscription ownership
+
+The last phase to complete the process is to accept subscription ownership.
+
+#### Get a destination application access token
+
+Replace `{{placeholders}}` with the actual tenant ID, application (client) ID, and app secret values that you saved when you created the destination tenant application previously.
+
+Invoke the request and save the `access_token` value from the response for the next step.
+
+```http
+POST https://login.microsoftonline.com/{{tenant_id}}/oauth2/token
+Content-Type: application/x-www-form-urlencoded
+
+grant_type=client_credentials&client_id={{client_id}}&client_secret={{app_secret}}&resource=https%3A%2F%2Fmanagement.azure.com%2F
+```
+
+#### Accept ownership
+
+Use the following information to accept ownership of the subscription in the destination tenant:
+
+- `{{subscription_id}}`: ID of the subscription created in the [Create a subscription alias](#create-a-subscription-alias) section. It's contained in the `Location` header that you noted.
+- `{{access_token}}`: Access token created in the previous step.
+- `{{subscription_display_name}}`: Display name for the subscription in your Azure environment.
+
+```http
+POST https://management.azure.com/providers/Microsoft.Subscription/subscriptions/{{subscription_id}}/acceptOwnership?api-version=2021-10-01
+Authorization: Bearer {{access_token}}
+Content-Type: application/json
+
+{
+ "properties": {
+ "displayName": "{{subscription_display_name}}",
+ "managementGroupId": null
+ }
+}
+```
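+
+To verify the result, you can check the acceptance status with the same destination access token. The following request is a sketch that reuses the same API version; confirm the endpoint against the Microsoft.Subscription REST reference:
+
+```http
+GET https://management.azure.com/providers/Microsoft.Subscription/subscriptions/{{subscription_id}}/acceptOwnershipStatus?api-version=2021-10-01
+Authorization: Bearer {{access_token}}
+```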
+
+## Next steps
+
+* Now that you've created a subscription, you can grant that ability to other users and service principals. For more information, see [Grant access to create Azure Enterprise subscriptions (preview)](grant-access-to-create-subscription.md).
+* For more information about managing large numbers of subscriptions using management groups, see [Organize your resources with Azure management groups](../../governance/management-groups/overview.md).
+* To change the management group for a subscription, see [Move subscriptions](../../governance/management-groups/manage.md#move-subscriptions).
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
Previously updated : 09/01/2021 Last updated : 08/22/2022
This article helps you programmatically create Azure subscriptions for a Microso
In this article, you learn how to create subscriptions programmatically using Azure Resource Manager.
+If you need to create an Azure MCA subscription across Azure Active Directory tenants, see [Programmatically create MCA subscriptions across Azure Active Directory tenants](programmatically-create-subscription-microsoft-customer-agreement-across-tenants.md).
+ When you create an Azure subscription programmatically, that subscription is governed by the agreement under which you obtained Azure services from Microsoft or an authorized reseller. For more information, see [Microsoft Azure Legal Information](https://azure.microsoft.com/support/legal/). [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
data-factory Transform Data Using Custom Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-custom-activity.md
The following table describes names and descriptions of properties that are spec
> [!NOTE] > If you are passing linked services as referenceObjects in Custom Activity, it is a good security practice to pass an Azure Key Vault enabled linked service (since it does not contain any secure strings) and fetch the credentials using secret name directly from Key Vault from the code. You can find an example [here](https://github.com/nabhishek/customactivity_sample/tree/linkedservice) that references AKV enabled linked service, retrieves the credentials from Key Vault, and then accesses the storage in the code.
+> [!NOTE]
+> Currently, only Azure Blob storage is supported for resourceLinkedService in the custom activity. It's the only linked service that gets created by default, and there's no option to choose other connectors such as ADLS Gen2.
+ ## Custom activity permissions The custom activity sets the Azure Batch auto-user account to *Non-admin access with task scope* (the default auto-user specification). You can't change the permission level of the auto-user account. For more info, see [Run tasks under user accounts in Batch | Auto-user accounts](../batch/batch-user-accounts.md#auto-user-accounts).
data-factory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new-archive.md
+
+ Title: What's new archive
+description: This page archives older months' highlights of new features and recent improvements for Azure Data Factory. Data Factory is a managed cloud service that's built for complex hybrid extract-transform-and-load (ETL), extract-load-and-transform (ELT), and data integration projects.
++++++ Last updated : 09/27/2022+
+# What's new archive for Azure Data Factory
+
+Azure Data Factory is improved on an ongoing basis. To stay up to date with the most recent developments, refer to the current [What's New](whats-new.md) page, which provides you with information about:
+
+- The latest releases.
+- Known issues.
+- Bug fixes.
+- Deprecated functionality.
+- Plans for changes.
+
+This archive page retains updates from older months.
+
+Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly updates.
+
+## June 2022
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+<tr><td rowspan=3><b>Data flow</b></td><td>Fuzzy join supported for data flows</td><td>Fuzzy join is now supported in Join transformation of data flows with configurable similarity score on join conditions.<br><a href="data-flow-join.md#fuzzy-join">Learn more</a></td></tr>
+<tr><td>Editing capabilities in source projection</td><td>Editing capabilities in source projection are available in data flows to make schema modifications easy.<br><a href="data-flow-source.md#source-options">Learn more</a></td></tr>
+<tr><td>Assert error handling</td><td>Assert error handling is now supported in data flows for data quality and data validation.<br><a href="data-flow-assert.md">Learn more</a></td></tr>
+<tr><td rowspan=2><b>Data Movement</b></td><td>Parameterization natively supported in 4 additional connectors</td><td>We added native UI support of parameterization for four additional linked services.</td></tr>
+<tr><td>SAP Change Data Capture (CDC) capabilities in the new SAP ODP connector (Public Preview)</td><td>SAP Change Data Capture (CDC) capabilities are now supported in the new SAP ODP connector.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-the-public-preview-of-the-sap-cdc-solution-in-azure/ba-p/3420904">Learn more</a></td></tr>
+<tr><td><b>Integration Runtime</b></td><td>Time-To-Live in managed VNET (Public Preview)</td><td>Time-To-Live can now be set for the provisioned compute in a managed virtual network.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879">Learn more</a></td></tr>
+<tr><td><b>Monitoring</b></td><td> Rerun pipeline with new parameters</td><td>You can now rerun pipelines with new parameter values in Azure Data Factory.<br><a href="monitor-visually.md#rerun-pipelines-and-activities">Learn more</a></td></tr>
+<tr><td><b>Orchestration</b></td><td>'turnOffAsync' property is available in Web activity</td><td>Web activity supports an async request-reply pattern that invokes HTTP GET on the Location field in the response header of an HTTP 202 response. It helps the Web activity automatically poll the monitoring endpoint until the job completes. The 'turnOffAsync' property is supported to disable this behavior in cases where polling isn't needed.<br><a href="control-flow-web-activity.md#type-properties">Learn more</a></td></tr>
+</table>
++
+### Video summary
+
+> [!VIDEO https://www.youtube.com/embed?v=Ay3tsJe_vMM&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=3]
+
+## May 2022
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td><b>Data flow</b></td><td>User Defined Functions for mapping data flows</td><td>Azure Data Factory introduces in public preview user defined functions and data flow libraries. A user defined function is a customized expression you can define to be able to reuse logic across multiple mapping data flows. User defined functions live in a collection called a data flow library to be able to easily group up common sets of customized functions.<br><a href="concepts-data-flow-udf.md">Learn more</a></td></tr>
+
+</table>
+
+## April 2022
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=3><b>Data flow</b></td><td>Data preview and debug improvements in mapping data flows</td><td>Debug sessions using the AutoResolve Azure integration runtime (IR) will now start up in under 10 seconds. There are new updates to the data preview panel in mapping data flows. Now you can sort the rows inside the data preview view by selecting column headers. You can move columns around interactively. You can also save the data preview results as a CSV by using Export CSV.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
+<tr><td>Dataverse connector is available for mapping data flows</td><td>Dataverse connector is available as source and sink for mapping data flows.<br><a href="connector-dynamics-crm-office-365.md">Learn more</a></td></tr>
+<tr><td>Support for user database schemas for staging with the Azure Synapse Analytics and PostgreSQL connectors in data flow sink</td><td>Data flow sink now supports using a user database schema for staging in both the Azure Synapse Analytics and PostgreSQL connectors.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-flow-sink-supports-user-db-schema-for-staging-in-azure/ba-p/3299210">Learn more</a></td></tr>
+
+<tr><td><b>Monitoring</b></td><td>Multiple updates to Data Factory monitoring experiences</td><td>New updates to the monitoring experience in Data Factory include the ability to export results to a CSV, clear all filters, and open a run in a new tab. Column and result caching is also improved.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531">Learn more</a></td></tr>
+
+<tr><td><b>User interface</b></td><td>New regional format support</td><td>Choosing your language and the regional format in settings influences the format of data such as dates and times in Azure Data Factory Studio monitoring. For example, the time format in Monitoring appears as "Apr 2, 2022, 3:40:29 pm" when English is chosen as the regional format, and as "2 Apr 2022, 15:40:29" when French is chosen. These settings affect only the Azure Data Factory Studio user interface and don't change or modify your actual data or time zone.</td></tr>
+
+</table>
+
+## March 2022
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=5><b>Data flow</b></td><td>ScriptLines and parameterized linked service support added mapping data flows</td><td>It's now easy to detect changes to your data flow script in Git with ScriptLines in your data flow JSON definition. Parameterized linked services can now also be used inside your data flows for flexible generic connection patterns.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-mapping-data-flows-adds-scriptlines-and-link-service/ba-p/3249929#M589">Learn more</a></td></tr>
+<tr><td>Flowlets general availability (GA)</td><td>Flowlets is now generally available to create reusable portions of data flow logic that you can share in other pipelines as inline transformations. Flowlets enable extract-transform-and-load (ETL) jobs to be composed of custom or common logic components.<br><a href="concepts-data-flow-flowlet.md">Learn more</a></td></tr>
+
+<tr><td>Change Feed connectors are available in five data flow source transformations</td><td>Change Feed connectors are available in data flow source transformations for Azure Cosmos DB, Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, and the common data model (CDM).<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450">Learn more</a></td></tr>
+<tr><td>Data preview and debug improvements in mapping data flows</td><td>New features were added to data preview and the debug experience in mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
+<tr><td>SFTP connector for mapping data flow</td><td>SFTP connector is available for mapping data flow as both source and sink.<br><a href="connector-sftp.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+
+<tr><td><b>Data movement</b></td><td>Support Always Encrypted for SQL-related connectors in Lookup activity under Managed virtual network</td><td>Always Encrypted is supported for SQL Server, Azure SQL Database, Azure SQL Managed Instance, and Synapse Analytics in the Lookup activity under managed virtual network.<br><a href="control-flow-lookup-activity.md">Learn more</a></td></tr>
+
+<tr><td><b>Integration runtime</b></td><td>New UI layout in Azure IR creation and edit page</td><td>The UI layout of the IR creation and edit page now uses tab style for Settings, Virtual network, and Data flow runtime.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/new-ui-layout-in-azure-integration-runtime-creation-and-edit/ba-p/3248237">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Orchestration</b></td><td>Transform data by using the Script activity</td><td>You can use a Script activity to invoke a SQL script in SQL Database, Azure Synapse Analytics, SQL Server, Oracle, or Snowflake.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/execute-sql-statements-using-the-new-script-activity-in-azure/ba-p/3239969">Learn more</a></td></tr>
+<tr><td>Web activity timeout improvement</td><td>You can configure response timeout in a Web activity to prevent it from timing out if the response period is more than one minute, especially in the case of synchronous APIs.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307">Learn more</a></td></tr>
+
+</table>
+
+### Video summary
+
+> [!VIDEO https://www.youtube.com/embed?v=MkgBxFyYwhQ&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=2]
+
+## February 2022
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=4><b>Data flow</b></td><td>Parameterized linked services supported in mapping data flows</td><td>You can now use your parameterized linked services in mapping data flows to make your data flow pipelines generic and flexible.<br><a href="parameterize-linked-services.md?tabs=data-factory">Learn more</a></td></tr>
+<tr><td>SQL Database incremental source extract available in data flow (public preview)</td><td>A new option has been added on mapping data flow SQL Database sources called <i>Enable incremental extract (preview)</i>. Now you can automatically pull only the rows that have changed on your SQL Database sources by using data flows.<br><a href="connector-azure-sql-database.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+<tr><td>Four new connectors available for mapping data flows (public preview)</td><td>Data Factory now supports four new connectors (public preview) for mapping data flows: Quickbase connector, Smartsheet connector, TeamDesk connector, and Zendesk connector.<br><a href="connector-overview.md?tabs=data-factory">Learn more</a></td></tr>
+<tr><td>Azure Cosmos DB (SQL API) for mapping data flow now supports inline mode</td><td>Azure Cosmos DB (SQL API) for mapping data flow can now use inline datasets.<br><a href="connector-azure-cosmos-db.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Data movement</b></td><td>Get metadata-driven data ingestion pipelines on the Data Factory Copy Data tool within 10 minutes (GA)</td><td>You can build large-scale data copy pipelines with a metadata-driven approach on the Copy Data tool within 10 minutes.<br><a href="copy-data-tool-metadata-driven.md">Learn more</a></td></tr>
+<tr><td>Data Factory Google AdWords connector API upgrade available</td><td>The Data Factory Google AdWords connector now supports the new AdWords API version. No action is required for the new connector user because it's enabled by default.<br><a href="connector-troubleshoot-google-adwords.md#migrate-to-the-new-version-of-google-ads-api">Learn more</a></td></tr>
+
+<tr><td><b>Continuous integration and continuous delivery (CI/CD)</b></td><td>Cross tenant Azure DevOps support</td><td>Configure a repository by using Azure DevOps Git in a different tenant from the Azure Data Factory.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
+
+<tr><td><b>Region expansion</b></td><td>Data Factory is now available in West US3 and Jio India West</td><td>Data Factory is now available in two new regions: West US3 and Jio India West. You can colocate your ETL workflow in these new regions if you're using these regions to store and manage your modern data warehouse. You can also use these regions for business continuity and disaster recovery purposes if you need to fail over from another region within the geo.<br><a href="https://azure.microsoft.com/global-infrastructure/services/?products=data-factory&regions=all">Learn more</a></td></tr>
+
+<tr><td><b>Security</b></td><td>Connect to an Azure DevOps account in another Azure Active Directory (Azure AD) tenant</td><td>You can connect your Data Factory instance to an Azure DevOps account in a different Azure AD tenant for source control purposes.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
+</table>
+
+### Video summary
+
+> [!VIDEO https://www.youtube.com/embed?v=r22nthp-f4g&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=1]
+
+## January 2022
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=5><b>Data flow</b></td><td>Quick reuse is now automatic in all Azure IRs that use Time to Live (TTL)</td><td>You no longer need to manually specify "quick reuse." Data Factory mapping data flows can now start up subsequent data flow activities in under five seconds after you set a TTL.<br><a href="concepts-integration-runtime-performance.md#time-to-live">Learn more</a></td></tr>
+<tr><td>Retrieve your custom Assert description</td><td>In the Assert transformation, you can define your own dynamic description message. You can use the new function <b>assertErrorMessage()</b> to retrieve the row-by-row message and store it in your destination data.<br><a href="data-flow-expressions-usage.md#assertErrorMessages">Learn more</a></td></tr>
+<tr><td>Automatic schema detection in Parse transformation</td><td>A new feature added to the Parse transformation makes it easy to automatically detect the schema of an embedded complex field inside a string column. Select the <b>Detect schema</b> button to set your target schema automatically.<br><a href="data-flow-parse.md">Learn more</a></td></tr>
+<tr><td>Support Dynamics 365 connector as both sink and source</td><td>You can now connect directly to Dynamics 365 to transform your Dynamics data at scale by using the new mapping data flow connector for Dynamics 365.<br><a href="connector-dynamics-crm-office-365.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+<tr><td>Always Encrypted SQL connections now available in data flows</td><td>Always Encrypted can now source transformations in SQL Server, SQL Database, SQL Managed Instance, and Azure Synapse when you use data flows.<br><a href="connector-azure-sql-database.md?tabs=data-factory">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Data movement</b></td><td>Data Factory Azure Databricks Delta Lake connector supports new authentication types</td><td>Data Factory Databricks Delta Lake connector now supports two more authentication types: system-assigned managed identity authentication and user-assigned managed identity authentication.<br><a href="connector-azure-databricks-delta-lake.md">Learn more</a></td></tr>
+<tr><td>Data Factory Copy activity supports upsert in several more connectors</td><td>Data Factory Copy activity now supports upsert while it sinks data to SQL Server, SQL Database, SQL Managed Instance, and Azure Synapse.<br><a href="connector-overview.md">Learn more</a></td></tr>
+
+</table>
+
+## December 2021
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=9><b>Data flow</b></td><td>Dynamics connector as native source and sink for mapping data flows</td><td>The Dynamics connector is now supported as source and sink for mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754">Learn more</a></td></tr>
+<tr><td>Native change data capture (CDC) is now natively supported</td><td>CDC is now natively supported in Data Factory for Azure Cosmos DB, Blob Storage, Data Lake Storage Gen1, Data Lake Storage Gen2, and CDM.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/cosmosdb-change-feed-is-supported-in-adf-now/ba-p/3037011">Learn more</a></td></tr>
+<tr><td>Flowlets public preview</td><td>The flowlets public preview allows data flow developers to build reusable components to easily build composable data transformation logic.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-flowlets-preview-for-adf-and-synapse/ba-p/3030699">Learn more</a></td></tr>
+<tr><td>Map Data public preview</td><td>The Map Data preview enables business users to define column mapping and transformations to load Azure Synapse lake databases.<br><a href="../synapse-analytics/database-designer/overview-map-data.md">Learn more</a></td></tr>
+<tr><td>Multiple output destinations from Power Query</td><td>You can now map multiple output destinations from Power Query in Data Factory for flexible ETL patterns for citizen data integrators.<br><a href="control-flow-power-query-activity.md#sink">Learn more</a></td></tr>
+<tr><td>External Call transformation support</td><td>Extend the functionality of mapping data flows by using the External Call transformation. You can now add your own custom code as a REST endpoint or call a curated third-party service row by row.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
+<tr><td>Enable quick reuse by Azure Synapse mapping data flows with TTL support</td><td>Azure Synapse mapping data flows now support quick reuse by setting a TTL in the Azure IR. Using this setting enables your subsequent data flow activities to execute in under five seconds.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
+<tr><td>Assert transformation</td><td>Easily add data quality, data domain validation, and metadata checks to your Data Factory pipelines by using the Assert transformation in mapping data flows.<br><a href="data-flow-assert.md">Learn more</a></td></tr>
+<tr><td>IntelliSense support in expression builder for more productive pipeline authoring experiences</td><td>IntelliSense support in expression builder and dynamic content authoring makes Data Factory and Azure Synapse pipeline developers more productive while they write complex expressions in their data pipelines.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/intellisense-support-in-expression-builder-for-more-productive/ba-p/3041459">Learn more</a></td></tr>
+
+</table>
+
+## November 2021
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr>
+ <td><b>Continuous integration and continuous delivery (CI/CD)</b></td>
+ <td>GitHub integration improvements</td>
+ <td>Improvements in Data Factory and GitHub integration remove limits on 1,000 Data Factory resources per resource type, such as datasets and pipelines. For large data factories, this change helps mitigate the impact of the GitHub API rate limit.<br><a href="source-control.md">Learn more</a></td>
+ </tr>
+
+<tr><td rowspan=3><b>Data flow</b></td><td>Set a custom error code and error message with the Fail activity</td><td>Fail activity enables ETL developers to set the error message and custom error code for a Data Factory pipeline.<br><a href="control-flow-fail-activity.md">Learn more</a></td></tr>
+<tr><td>External call transformation</td><td>Mapping data flows External Call transformation enables ETL developers to use transformations and data enrichments provided by REST endpoints or third-party API services.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
+<tr><td>Synapse quick reuse</td><td>When you execute data flow in Synapse Analytics, use the TTL feature. The TTL feature uses the quick reuse feature so that sequential data flows will execute within a few seconds. You can set the TTL when you configure an Azure IR.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
+
+<tr><td rowspan=3><b>Data movement</b></td><td>Copy activity supports reading data from FTP or SFTP without chunking</td><td>Automatically determine the file length or the relevant offset to be read when you copy data from an FTP or SFTP server. With this capability, Data Factory automatically connects to the FTP or SFTP server to determine the file length. After the length is determined, Data Factory divides the file into multiple chunks and reads them in parallel.<br><a href="connector-ftp.md">Learn more</a></td></tr>
+<tr><td><i>UTF-8 without BOM</i> support in Copy activity</td><td>Copy activity supports writing data with encoding the type <i>UTF-8 without BOM</i> for JSON and delimited text datasets.</td></tr>
+<tr><td>Multicharacter column delimiter support</td><td>Copy activity supports using multicharacter column delimiters for delimited text datasets.</td></tr>
+
+<tr>
+ <td><b>Integration runtime</b></td>
+ <td>Run any process anywhere in three steps with SQL Server Integration Services (SSIS) in Data Factory</td>
+ <td>Learn how to use the best of Data Factory and SSIS capabilities in a pipeline. A sample SSIS package with parameterized properties helps you get a jump-start. With Data Factory Studio, the SSIS package can be easily dragged and dropped into a pipeline and used as part of an Execute SSIS Package activity.<br><br>This capability enables you to run the Data Factory pipeline with an SSIS package on self-hosted IRs or SSIS IRs. By providing run-time parameter values, you can use the powerful capabilities of Data Factory and SSIS capabilities together. This article illustrates three steps to run any process, which can be any executable, such as an application, program, utility, or batch file, anywhere.
+<br><a href="https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-process-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2962609">Learn more</a></td>
+ </tr>
+</table>
++
+## October 2021
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=3><b>Data flow</b></td><td>Azure Data Explorer and Amazon Web Services (AWS) S3 connectors</td><td>The Microsoft Data Integration team has released two new connectors for mapping data flows. If you're using Azure Synapse, you can now connect directly to your AWS S3 buckets for data transformations. In both Data Factory and Azure Synapse, you can now natively connect to your Azure Data Explorer clusters in mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/mapping-data-flow-gets-new-native-connectors/ba-p/2866754">Learn more</a></td></tr>
+<tr><td>Power Query activity leaves preview for GA</td><td>The Data Factory Power Query pipeline activity is now generally available. This new feature provides scaled-out data prep and data wrangling for citizen integrators inside the Data Factory browser UI for an integrated experience for data engineers. The Power Query data wrangling feature in Data Factory provides a powerful, easy-to-use pipeline capability to solve your most complex data integration and ETL patterns in a single service.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/data-wrangling-at-scale-with-adf-s-power-query-activity-now/ba-p/2824207">Learn more</a></td></tr>
+<tr><td>New Stringify data transformation in mapping data flows</td><td>Mapping data flows adds a new data transformation called Stringify to make it easy to convert complex data types like structs and arrays into string form. These data types then can be sent to structured output destinations.<br><a href="data-flow-stringify.md">Learn more</a></td></tr>
+
+<tr>
+ <td><b>Integration runtime</b></td>
+ <td>Express virtual network injection for SSIS IR (public preview)</td>
+ <td>The SSIS IR now supports express virtual network injection.<br>
+ Learn more:<br>
+ <a href="join-azure-ssis-integration-runtime-virtual-network.md">Overview of virtual network injection for SSIS IR</a><br>
+ <a href="azure-ssis-integration-runtime-virtual-network-configuration.md">Standard vs. express virtual network injection for SSIS IR</a><br>
+ <a href="azure-ssis-integration-runtime-express-virtual-network-injection.md">Express virtual network injection for SSIS IR</a>
+ </td>
+</tr>
+
+<tr><td rowspan=2><b>Security</b></td><td>Azure Key Vault integration improvement</td><td>Key Vault integration now has dropdowns so that users can select the secret values in the linked service. This capability increases productivity because users aren't required to type in the secrets, which could result in human error.</td></tr>
+<tr><td>Support for user-assigned managed identity in Data Factory</td><td>Credential safety is crucial for any enterprise. The Data Factory team is committed to making the data engineering process secure yet simple for data engineers. User-assigned managed identity (preview) is now supported in all connectors and linked services that support Azure AD-based authentication.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/support-for-user-assigned-managed-identity-in-azure-data-factory/ba-p/2841013">Learn more</a></td></tr>
+</table>
+
+## September 2021
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+ <tr><td><b>Continuous integration and continuous delivery</b></td><td>Expanded CI/CD capabilities</td><td>You can now create a new Git branch based on any other branch in Data Factory.<br><a href="source-control.md#version-control">Learn more</a></td></tr>
+
+<tr><td rowspan=3><b>Data movement</b></td><td>Amazon Relational Database Service (RDS) for Oracle sources</td><td>The Amazon RDS for Oracle sources connector is now available in both Data Factory and Azure Synapse.<br><a href="connector-amazon-rds-for-oracle.md">Learn more</a></td></tr>
+<tr><td>Amazon RDS for SQL Server sources</td><td>The Amazon RDS for the SQL Server sources connector is now available in both Data Factory and Azure Synapse.<br><a href="connector-amazon-rds-for-sql-server.md">Learn more</a></td></tr>
+<tr><td>Support parallel copy from Azure Database for PostgreSQL</td><td>The Azure Database for PostgreSQL connector now supports parallel copy operations.<br><a href="connector-azure-database-for-postgresql.md">Learn more</a></td></tr>
+
+<tr><td rowspan=3><b>Data flow</b></td><td>Use Data Lake Storage Gen2 to execute pre- and post-processing commands</td><td>Hadoop Distributed File System pre- and post-processing commands can now be executed by using Data Lake Storage Gen2 sinks in data flows.<br><a href="connector-azure-data-lake-storage.md#pre-processing-and-post-processing-commands">Learn more</a></td></tr>
+<tr><td>Edit data flow properties for existing instances of the Azure IR </td><td>The Azure IR has been updated to allow editing of data flow properties for existing IRs. You can now modify data flow compute properties without needing to create a new Azure IR.<br><a href="concepts-integration-runtime.md">Learn more</a></td></tr>
+<tr><td>TTL setting for Azure Synapse to improve pipeline activities execution startup time</td><td>Azure Synapse has added TTL to the Azure IR to enable your data flow pipeline activities to begin execution in seconds, which greatly minimizes the runtime of your data flow pipelines.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
+
+<tr><td><b>Integration runtime</b></td><td>Data Factory managed virtual network GA</td><td>You can now provision the Azure IR as part of a managed virtual network and use private endpoints to securely connect to supported data stores. Data traffic goes through Azure Private Links, which provides secured connectivity to the data source. It also prevents data exfiltration to the public internet.<br><a href="managed-virtual-network-private-endpoint.md">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Orchestration</b></td><td>Operationalize and provide SLA for data pipelines</td><td>The new Elapsed Time Pipeline Run metric, combined with Data Factory alerts, empowers data pipeline developers to better deliver SLAs to their customers. Now you can tell us how long a pipeline should run, and we'll notify you proactively when the pipeline runs longer than expected.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/operationalize-and-provide-sla-for-data-pipelines/ba-p/2767768">Learn more</a></td></tr>
+<tr><td>Fail activity (public preview)</td><td>The new Fail activity allows you to throw an error in a pipeline intentionally for any reason. For example, you might use the Fail activity if a Lookup activity returns no matching data or a custom activity finishes with an internal error.<br><a href="control-flow-fail-activity.md">Learn more</a></td></tr>
+</table>
++
+## August 2021
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+ <tr><td><b>Continuous integration and continuous delivery</b></td><td>CI/CD improvements with GitHub support in Azure Government and Azure China</td><td>We've added support for GitHub in Azure for US Government and Azure China.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/cicd-improvements-with-github-support-in-azure-government-and/ba-p/2686918">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Data movement</b></td><td>The Azure Cosmos DB API for MongoDB connector supports versions 3.6 and 4.0 in Data Factory</td><td>The Data Factory Azure Cosmos DB API for MongoDB connector now supports server versions 3.6 and 4.0.<br><a href="connector-azure-cosmos-db-mongodb-api.md">Learn more</a></td></tr>
+<tr><td>Enhance using COPY statement to load data into Azure Synapse</td><td>The Data Factory Azure Synapse connector now supports staged copy and copy source with *.* as wildcardFilename for the COPY statement.<br><a href="connector-azure-sql-data-warehouse.md#use-copy-statement">Learn more</a></td></tr>
+
+<tr><td><b>Data flow</b></td><td>REST endpoints are available as source and sink in data flow</td><td>Data flows in Data Factory and Azure Synapse now support REST endpoints as both a source and sink with full support for both JSON and XML payloads.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/rest-source-and-sink-now-available-for-data-flows/ba-p/2596484">Learn more</a></td></tr>
+
+<tr><td><b>Integration runtime</b></td><td>Diagnostic tool is available for self-hosted IR</td><td>A diagnostic tool for self-hosted IR is designed to provide a better user experience and help users to find potential issues. The tool runs a series of test scenarios on the self-hosted IR machine. Every scenario has typical health check cases for common issues.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/diagnostic-tool-for-self-hosted-integration-runtime/ba-p/2634905">Learn more</a></td></tr>
+
+<tr><td><b>Orchestration</b></td><td>Custom event trigger with advanced filtering option GA</td><td>You can now create a trigger that responds to a custom topic posted to Azure Event Grid. You can also use advanced filtering to get fine-grain control over what events to respond to.<br><a href="how-to-create-custom-event-trigger.md">Learn more</a></td></tr>
+</table>
++
+## July 2021
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td><b>Data movement</b></td><td>Get metadata-driven data ingestion pipelines on the Data Factory Copy Data tool within 10 minutes (public preview)</td><td>Now you can build large-scale data copy pipelines with a metadata-driven approach on the Copy Data tool (public preview) within 10 minutes.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/get-metadata-driven-data-ingestion-pipelines-on-adf-within-10/ba-p/2528219">Learn more</a></td></tr>
+
+<tr><td><b>Data flow</b></td><td>New map functions added in data flow transformation functions</td><td>A new set of data flow transformation functions enables data engineers to easily generate, read, and update map data types and complex map structures.<br><a href="data-flow-map-functions.md">Learn more</a></td></tr>
+
+<tr><td><b>Integration runtime</b></td><td>Five new regions are available in Data Factory managed virtual network (public preview)</td><td>Five new regions, China East2, China North2, US Government Arizona, US Government Texas, and US Government Virginia, are available in the Data Factory managed virtual network (public preview).<br></td></tr>
+
+<tr><td rowspan=2><b>Developer productivity</b></td><td>Data Factory home page improvements</td><td>The Data Factory home page has been redesigned with better contrast and reflow capabilities. A few sections are introduced on the home page to help you improve productivity in your data integration journey.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
+<tr><td>New landing page for Data Factory Studio</td><td>A new landing page is available when you open Data Factory Studio from the Data Factory pane in the Azure portal.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
+</table>
+
+## June 2021
+
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=4 valign="middle"><b>Data movement</b></td><td>New user experience with Data Factory Copy Data tool</td><td>The redesigned Copy Data tool is now available with improved data ingestion experience.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/a-re-designed-copy-data-tool-experience/ba-p/2380634">Learn more</a></td></tr>
+<tr><td>MongoDB and MongoDB Atlas are supported as both source and sink</td><td>This improvement supports copying data between any supported data store and MongoDB or MongoDB Atlas database.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/new-connectors-available-in-adf-mongodb-and-mongodb-atlas-are/ba-p/2441482">Learn more</a></td></tr>
+<tr><td>Always Encrypted is supported for SQL Database, SQL Managed Instance, and SQL Server connectors as both source and sink</td><td>Always Encrypted is available in Data Factory for SQL Database, SQL Managed Instance, and SQL Server connectors for the Copy activity.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/azure-data-factory-copy-now-supports-always-encrypted-for-both/ba-p/2461346">Learn more</a></td></tr>
+<tr><td>Setting custom metadata is supported in Copy activity when sinking to Data Lake Storage Gen2 or Blob Storage</td><td>When you write to Data Lake Storage Gen2 or Blob Storage, the Copy activity supports setting custom metadata or storage of the source file's last modified information as metadata.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/support-setting-custom-metadata-when-writing-to-blob-adls-gen2/ba-p/2545506#M490">Learn more</a></td></tr>
+
+<tr><td rowspan=4 valign="middle"><b>Data flow</b></td><td>SQL Server is now supported as a source and sink in data flows</td><td>SQL Server is now supported as a source and sink in data flows. Follow the link for instructions on how to configure your networking by using the Azure IR managed virtual network feature to talk to your SQL Server on-premises and cloud VM-based instances.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/new-data-flow-connector-sql-server-as-source-and-sink/ba-p/2406213">Learn more</a></td></tr>
+<tr><td>Dataflow Cluster quick reuse now enabled by default for all new Azure IRs</td><td>The popular data flow quick startup reuse feature is now generally available for Data Factory. All new Azure IRs now have quick reuse enabled by default.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/how-to-startup-your-data-flows-execution-in-less-than-5-seconds/ba-p/2267365">Learn more</a></td></tr>
+<tr><td>Power Query (public preview) activity</td><td>You can now build complex field mappings to your Power Query sink by using Data Factory data wrangling. The sink is now configured in the pipeline in the Power Query (public preview) activity to accommodate this update.<br><a href="wrangling-tutorial.md">Learn more</a></td></tr>
+<tr><td>Updated data flows monitoring UI in Data Factory</td><td>Data Factory has a new update for the monitoring UI to make it easier to view your data flow ETL job executions and quickly identify areas for performance tuning.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/updated-data-flows-monitoring-ui-in-adf-amp-synapse/ba-p/2432199">Learn more</a></td></tr>
+
+<tr><td><b>SQL Server Integration Services</b></td><td>Run any SQL statements or scripts anywhere in three steps with SSIS in Data Factory</td><td>This post provides three steps to run any SQL statements or scripts anywhere with SSIS in Data Factory.<ol><li>Prepare your self-hosted IR or SSIS IR.</li><li>Prepare an Execute SSIS Package activity in Data Factory pipeline.</li><li>Run the Execute SSIS Package activity on your self-hosted IR or SSIS IR.</li></ol><a href="https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-sql-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2457244">Learn more</a></td></tr>
+</table>
+
+## More information
+
+- [What's New in Azure Data Factory - current months](whats-new.md)
+- [Blog - Azure Data Factory](https://techcommunity.microsoft.com/t5/azure-data-factory/bg-p/AzureDataFactoryBlog)
+- [Stack Overflow forum](https://stackoverflow.com/questions/tagged/azure-data-factory)
+- [Twitter](https://twitter.com/AzDataFactory?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor)
+- [Videos](https://www.youtube.com/channel/UC2S0k7NeLcEm5_IhHUwpN0g/featured)
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
Previously updated : 01/21/2022 Last updated : 09/27/2022 # What's new in Azure Data Factory
Azure Data Factory is improved on an ongoing basis. To stay up to date with the
- Deprecated functionality. - Plans for changes.
-This page is updated monthly, so revisit it regularly.
+This page is updated monthly, so revisit it regularly. For older months' updates, refer to the [What's new archive](whats-new-archive.md).
-## August 2022
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td rowspan=3><b>Data flow</b></td><td>Appfigures connector added as Source (Preview)</td><td>We've added a new REST-based connector to mapping data flows! Users can now read their tables from Appfigures. Note: This connector is only available when using inline datasets.<br><a href="connector-appfigures.md">Learn more</a></td></tr>
-<tr><td>Cast transformation added - visually convert data types</td><td>Now, you can use the cast transformation to quickly transform data types visually!<br><a href="data-flow-cast.md">Learn more</a></td></tr>
-<tr><td>New UI for inline datasets - categories added to easily find data sources</td><td>We've updated our data flow source UI to make it easier to find your inline dataset type. Previously, you would have to scroll through the list or filter to find your inline dataset type. Now, we have categories that group your dataset types, making it easier to find what you're looking for.<br><a href="data-flow-source.md#inline-datasets">Learn more</a></td></tr>
-<tr><td rowspan=1><b>Data Movement</b></td><td>Service principal authentication type added for Blob storage</td><td>Service principal is added as a new additional authentication type based on existing authentication.<br><a href="connector-azure-blob-storage.md#service-principal-authentication">Learn more</a></td></tr>
-<tr><td rowspan=1><b>Continuous integration and continuous delivery (CI/CD)</b></td><td>Exclude turning off triggers that did not change in deployment</td><td>When you integrate CI/CD with ARM templates, deployment can now exclude turning off triggers that didn't change, instead of turning off all triggers.<br><a href=https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvements-related-to-pipeline-triggers-deployment/ba-p/3605064>Learn more</a></td></tr>
-<tr><td rowspan=3><b>Developer productivity</b></td><td>Default activity time-out changed from 7 days to 12 hours</td><td>The previous default timeout for new pipeline activities was 7 days for most activities, which was too long and far outside the most common activity execution times we observed and heard from you. So we changed the default timeout of new activities to 12 hours. Keep in mind that you should adjust the timeout on long-running processes (for example, large copy activities and data flow jobs) to a higher value if needed.<br><a href=https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-changing-default-pipeline-activity-timeout/ba-p/3598729>Learn more</a></td></tr>
-<tr><td>Expression builder UI update - categorical tabs added for easier use</td><td>We've updated our expression builder UI to make pipeline designing easier. We've created new content category tabs to make it easier to find what you're looking for.<br><a href="data-flow-cast.md">Learn more</a></td></tr>
-<tr><td>New data factory creation experience - 1 click to have your factory ready within seconds </td><td>The new creation experience is available in adf.azure.com to successfully create your factory within several seconds.<br><a href=https://techcommunity.microsoft.com/t5/azure-data-factory-blog/new-experience-for-creating-data-factory-within-seconds/ba-p/3561249>Learn more</a></td></tr>
-</table>
-
-## July 2022
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td rowspan=5><b>Data flow</b></td><td>Asana connector added as source</td><td>We've added a new REST-based connector to mapping data flows! Users can now read their tables from Asana. Note: This connector is only available when using inline datasets.<br><a href="connector-asana.md">Learn more</a></td></tr>
-<tr><td>3 new data transformation functions are supported</td><td>3 new data transformation functions have been added to mapping data flows in Azure Data Factory and Azure Synapse Analytics. Now, users are able to use collectUnique(), to create a new collection of unique values in an array, substringIndex(), to extract the substring before n occurrences of a delimiter, and topN(), to return the top n results after sorting your data.<br><a href=https://techcommunity.microsoft.com/t5/azure-data-factory-blog/3-new-data-transformation-functions-in-adf/ba-p/3582738>Learn more</a></td></tr>
-<tr><td>Refetch from source available in Refresh for data source change scenarios</td><td>When building and debugging a data flow, your source data can change. There is now a new easy way to refetch the latest updated source data from the refresh button in the data preview pane.<br><a href="concepts-data-flow-debug-mode.md#data-preview">Learn more</a></td></tr>
-<tr><td>User defined functions (GA) </td><td>Create reusable and customized expressions to avoid building complex logic over and over<br><a href="concepts-data-flow-udf.md">Learn more</a></td></tr>
-<tr><td>Easier configuration on data flow runtime - choose compute size among Small, Medium, and Large to pre-configure all integration runtime settings</td><td>Azure Data Factory has made it easier for users to configure Azure Integration Runtime for mapping data flows by choosing a compute size among Small, Medium, and Large to pre-configure all integration runtime settings. You can still set your own custom configurations.<br><a href=https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-makes-it-easy-to-select-azure-ir-size-for-data-flows/ba-p/3578033>Learn more</a></td></tr>
-<tr><td rowspan=1><b>Continuous integration and continuous delivery (CI/CD)</b></td><td>Include Global parameters supported in ARM template</td><td>We've added a new mechanism to include Global Parameters in the ARM templates. This helps to solve an earlier issue, which overrode some configurations during deployments when users included global parameters in ARM templates.<br><a href=https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvement-using-global-parameters-in-azure-data-factory/ba-p/3557265#M665>Learn more</a></td></tr>
-<tr><td><b>Developer productivity</b></td><td>Azure Data Factory Studio preview experience</td><td>Be a part of Azure Data Factory preview features to experience the latest Azure Data Factory capabilities, and be the first to share your feedback.<br><a href=https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-azure-data-factory-studio-preview-experience/ba-p/3563880>Learn more</a></td></tr>
-</table>
-
-## June 2022
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-<tr><td rowspan=3><b>Data flow</b></td><td>Fuzzy join supported for data flows</td><td>Fuzzy join is now supported in Join transformation of data flows with configurable similarity score on join conditions.<br><a href="data-flow-join.md#fuzzy-join">Learn more</a></td></tr>
-<tr><td>Editing capabilities in source projection</td><td>Editing capabilities in source projection are available in data flows to make schema modifications easy<br><a href="data-flow-source.md#source-options">Learn more</a></td></tr>
-<tr><td>Assert error handling</td><td>Assert error handling is now supported in data flows for data quality and data validation<br><a href="data-flow-assert.md">Learn more</a></td></tr>
-<tr><td rowspan=2><b>Data Movement</b></td><td>Parameterization natively supported in additional 4 connectors</td><td>We added native UI support of parameterization for four additional linked services.</td></tr>
-<tr><td>SAP Change Data Capture (CDC) capabilities in the new SAP ODP connector (Public Preview)</td><td>SAP Change Data Capture (CDC) capabilities are now supported in the new SAP ODP connector.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-the-public-preview-of-the-sap-cdc-solution-in-azure/ba-p/3420904">Learn more</a></td></tr>
-<tr><td><b>Integration Runtime</b></td><td>Time-To-Live in managed VNET (Public Preview)</td><td>Time-To-Live can be set to the provisioned computes in managed VNET.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879">Learn more</a></td></tr>
-<tr><td><b>Monitoring</b></td><td> Rerun pipeline with new parameters</td><td>You can now rerun pipelines with new parameter values in Azure Data Factory.<br><a href="monitor-visually.md#rerun-pipelines-and-activities">Learn more</a></td></tr>
-<tr><td><b>Orchestration</b></td><td>'turnOffAsync' property is available in Web activity</td><td>Web activity supports an async request-reply pattern that invokes HTTP GET on the Location field in the response header of an HTTP 202 Response. It helps the Web activity automatically poll the monitoring endpoint while the job runs. The 'turnOffAsync' property is supported to disable this behavior in cases where polling isn't needed.<br><a href="control-flow-web-activity.md#type-properties">Learn more</a></td></tr>
-</table>
-
-
-## May 2022
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr><td><b>Data flow</b></td><td>User Defined Functions for mapping data flows</td><td>Azure Data Factory introduces in public preview user defined functions and data flow libraries. A user defined function is a customized expression you can define to be able to reuse logic across multiple mapping data flows. User defined functions live in a collection called a data flow library to be able to easily group up common sets of customized functions.<br><a href="concepts-data-flow-udf.md">Learn more</a></td></tr>
-
-</table>
-
-## April 2022
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr><td rowspan=3><b>Data flow</b></td><td>Data preview and debug improvements in mapping data flows</td><td>Debug sessions using the AutoResolve Azure integration runtime (IR) will now start up in under 10 seconds. There are new updates to the data preview panel in mapping data flows. Now you can sort the rows inside the data preview view by selecting column headers. You can move columns around interactively. You can also save the data preview results as a CSV by using Export CSV.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
-<tr><td>Dataverse connector is available for mapping data flows</td><td>Dataverse connector is available as source and sink for mapping data flows.<br><a href="connector-dynamics-crm-office-365.md">Learn more</a></td></tr>
-<tr><td>Support for user database schemas for staging with the Azure Synapse Analytics and PostgreSQL connectors in data flow sink</td><td>Data flow sink now supports using a user database schema for staging in both the Azure Synapse Analytics and PostgreSQL connectors.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-flow-sink-supports-user-db-schema-for-staging-in-azure/ba-p/3299210">Learn more</a></td></tr>
-
-<tr><td><b>Monitoring</b></td><td>Multiple updates to Data Factory monitoring experiences</td><td>New updates to the monitoring experience in Data Factory include the ability to export results to a CSV, clear all filters, and open a run in a new tab. Column and result caching is also improved.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531">Learn more</a></td></tr>
-
-<tr><td><b>User interface</b></td><td>New regional format support</td><td>Choosing your language and regional format in settings influences how data such as dates and times appear in Azure Data Factory Studio monitoring. For example, the time format in Monitoring appears as "Apr 2, 2022, 3:40:29 pm" when English is chosen as the regional format, and as "2 Apr 2022, 15:40:29" when French is chosen. These settings affect only the Azure Data Factory Studio user interface and don't change or modify your actual data or time zone.</td></tr>
-
-</table>
-
-## March 2022
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr><td rowspan=5><b>Data flow</b></td><td>ScriptLines and parameterized linked service support added mapping data flows</td><td>It's now easy to detect changes to your data flow script in Git with ScriptLines in your data flow JSON definition. Parameterized linked services can now also be used inside your data flows for flexible generic connection patterns.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-mapping-data-flows-adds-scriptlines-and-link-service/ba-p/3249929#M589">Learn more</a></td></tr>
-<tr><td>Flowlets general availability (GA)</td><td>Flowlets is now generally available to create reusable portions of data flow logic that you can share in other pipelines as inline transformations. Flowlets enable extract-transform-and-load (ETL) jobs to be composed of custom or common logic components.<br><a href="concepts-data-flow-flowlet.md">Learn more</a></td></tr>
-
-<tr><td>Change Feed connectors are available in five data flow source transformations</td><td>Change Feed connectors are available in data flow source transformations for Azure Cosmos DB, Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, and the common data model (CDM).<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450">Learn more</a></td></tr>
-<tr><td>Data preview and debug improvements in mapping data flows</td><td>New features were added to data preview and the debug experience in mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
-<tr><td>SFTP connector for mapping data flow</td><td>SFTP connector is available for mapping data flow as both source and sink.<br><a href="connector-sftp.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
-
-<tr><td><b>Data movement</b></td><td>Support Always Encrypted for SQL-related connectors in Lookup activity under Managed virtual network</td><td>Always Encrypted is supported for SQL Server, Azure SQL Database, Azure SQL Managed Instance, and Synapse Analytics in the Lookup activity under managed virtual network.<br><a href="control-flow-lookup-activity.md">Learn more</a></td></tr>
-
-<tr><td><b>Integration runtime</b></td><td>New UI layout in Azure IR creation and edit page</td><td>The UI layout of the IR creation and edit page now uses tab style for Settings, Virtual network, and Data flow runtime.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/new-ui-layout-in-azure-integration-runtime-creation-and-edit/ba-p/3248237">Learn more</a></td></tr>
-
-<tr><td rowspan=2><b>Orchestration</b></td><td>Transform data by using the Script activity</td><td>You can use a Script activity to invoke a SQL script in SQL Database, Azure Synapse Analytics, SQL Server, Oracle, or Snowflake.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/execute-sql-statements-using-the-new-script-activity-in-azure/ba-p/3239969">Learn more</a></td></tr>
-<tr><td>Web activity timeout improvement</td><td>You can configure response timeout in a Web activity to prevent it from timing out if the response period is more than one minute, especially in the case of synchronous APIs.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307">Learn more</a></td></tr>
-
-</table>
-
-## February 2022
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr><td rowspan=4><b>Data flow</b></td><td>Parameterized linked services supported in mapping data flows</td><td>You can now use your parameterized linked services in mapping data flows to make your data flow pipelines generic and flexible.<br><a href="parameterize-linked-services.md?tabs=data-factory">Learn more</a></td></tr>
-<tr><td>SQL Database incremental source extract available in data flow (public preview)</td><td>A new option has been added on mapping data flow SQL Database sources called <i>Enable incremental extract (preview)</i>. Now you can automatically pull only the rows that have changed on your SQL Database sources by using data flows.<br><a href="connector-azure-sql-database.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
-<tr><td>Four new connectors available for mapping data flows (public preview)</td><td>Data Factory now supports four new connectors (public preview) for mapping data flows: Quickbase connector, Smartsheet connector, TeamDesk connector, and Zendesk connector.<br><a href="connector-overview.md?tabs=data-factory">Learn more</a></td></tr>
-<tr><td>Azure Cosmos DB (SQL API) for mapping data flow now supports inline mode</td><td>Azure Cosmos DB (SQL API) for mapping data flow can now use inline datasets.<br><a href="connector-azure-cosmos-db.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
-
-<tr><td rowspan=2><b>Data movement</b></td><td>Get metadata-driven data ingestion pipelines on the Data Factory Copy Data tool within 10 minutes (GA)</td><td>You can build large-scale data copy pipelines with a metadata-driven approach on the Copy Data tool within 10 minutes.<br><a href="copy-data-tool-metadata-driven.md">Learn more</a></td></tr>
-<tr><td>Data Factory Google AdWords connector API upgrade available</td><td>The Data Factory Google AdWords connector now supports the new AdWords API version. No action is required for the new connector user because it's enabled by default.<br><a href="connector-troubleshoot-google-adwords.md#migrate-to-the-new-version-of-google-ads-api">Learn more</a></td></tr>
-
-<tr><td><b>Continuous integration and continuous delivery (CI/CD)</b></td><td>Cross tenant Azure DevOps support</td><td>Configure a repository using Azure DevOps Git not in the same tenant as the Azure Data Factory.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
-
-<tr><td><b>Region expansion</b></td><td>Data Factory is now available in West US3 and Jio India West</td><td>Data Factory is now available in two new regions: West US3 and Jio India West. You can colocate your ETL workflow in these new regions if you're using these regions to store and manage your modern data warehouse. You can also use these regions for business continuity and disaster recovery purposes if you need to fail over from another region within the geo.<br><a href="https://azure.microsoft.com/global-infrastructure/services/?products=data-factory&regions=all">Learn more</a></td></tr>
-
-<tr><td><b>Security</b></td><td>Connect to an Azure DevOps account in another Azure Active Directory (Azure AD) tenant</td><td>You can connect your Data Factory instance to an Azure DevOps account in a different Azure AD tenant for source control purposes.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
-</table>
+Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos.
-## January 2022
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr><td rowspan=5><b>Data flow</b></td><td>Quick reuse is now automatic in all Azure IRs that use Time to Live (TTL)</td><td>You no longer need to manually specify "quick reuse." Data Factory mapping data flows can now start up subsequent data flow activities in under five seconds after you set a TTL.<br><a href="concepts-integration-runtime-performance.md#time-to-live">Learn more</a></td></tr>
-<tr><td>Retrieve your custom Assert description</td><td>In the Assert transformation, you can define your own dynamic description message. You can use the new function <b>assertErrorMessage()</b> to retrieve the row-by-row message and store it in your destination data.<br><a href="data-flow-expressions-usage.md#assertErrorMessages">Learn more</a></td></tr>
-<tr><td>Automatic schema detection in Parse transformation</td><td>A new feature added to the Parse transformation makes it easy to automatically detect the schema of an embedded complex field inside a string column. Select the <b>Detect schema</b> button to set your target schema automatically.<br><a href="data-flow-parse.md">Learn more</a></td></tr>
-<tr><td>Support Dynamics 365 connector as both sink and source</td><td>You can now connect directly to Dynamics 365 to transform your Dynamics data at scale by using the new mapping data flow connector for Dynamics 365.<br><a href="connector-dynamics-crm-office-365.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
-<tr><td>Always Encrypted SQL connections now available in data flows</td><td>Always Encrypted is now available for source transformations in SQL Server, SQL Database, SQL Managed Instance, and Azure Synapse when you use data flows.<br><a href="connector-azure-sql-database.md?tabs=data-factory">Learn more</a></td></tr>
-
-<tr><td rowspan=2><b>Data movement</b></td><td>Data Factory Azure Databricks Delta Lake connector supports new authentication types</td><td>Data Factory Databricks Delta Lake connector now supports two more authentication types: system-assigned managed identity authentication and user-assigned managed identity authentication.<br><a href="connector-azure-databricks-delta-lake.md">Learn more</a></td></tr>
-<tr><td>Data Factory Copy activity supports upsert in several more connectors</td><td>Data Factory Copy activity now supports upsert while it sinks data to SQL Server, SQL Database, SQL Managed Instance, and Azure Synapse.<br><a href="connector-overview.md">Learn more</a></td></tr>
-
-</table>
-
-## December 2021
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr><td rowspan=9><b>Data flow</b></td><td>Dynamics connector as native source and sink for mapping data flows</td><td>The Dynamics connector is now supported as source and sink for mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754">Learn more</a></td></tr>
-<tr><td>Native change data capture (CDC) is now natively supported</td><td>CDC is now natively supported in Data Factory for Azure Cosmos DB, Blob Storage, Data Lake Storage Gen1, Data Lake Storage Gen2, and CDM.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/cosmosdb-change-feed-is-supported-in-adf-now/ba-p/3037011">Learn more</a></td></tr>
-<tr><td>Flowlets public preview</td><td>The flowlets public preview allows data flow developers to build reusable components to easily build composable data transformation logic.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-flowlets-preview-for-adf-and-synapse/ba-p/3030699">Learn more</a></td></tr>
-<tr><td>Map Data public preview</td><td>The Map Data preview enables business users to define column mapping and transformations to load Azure Synapse lake databases.<br><a href="../synapse-analytics/database-designer/overview-map-data.md">Learn more</a></td></tr>
-<tr><td>Multiple output destinations from Power Query</td><td>You can now map multiple output destinations from Power Query in Data Factory for flexible ETL patterns for citizen data integrators.<br><a href="control-flow-power-query-activity.md#sink">Learn more</a></td></tr>
-<tr><td>External Call transformation support</td><td>Extend the functionality of mapping data flows by using the External Call transformation. You can now add your own custom code as a REST endpoint or call a curated third-party service row by row.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
-<tr><td>Enable quick reuse by Azure Synapse mapping data flows with TTL support</td><td>Azure Synapse mapping data flows now support quick reuse by setting a TTL in the Azure IR. Using this setting enables your subsequent data flow activities to execute in under five seconds.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
-<tr><td>Assert transformation</td><td>Easily add data quality, data domain validation, and metadata checks to your Data Factory pipelines by using the Assert transformation in mapping data flows.<br><a href="data-flow-assert.md">Learn more</a></td></tr>
-<tr><td>IntelliSense support in expression builder for more productive pipeline authoring experiences</td><td>IntelliSense support in expression builder and dynamic content authoring makes Data Factory and Azure Synapse pipeline developers more productive while they write complex expressions in their data pipelines.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/intellisense-support-in-expression-builder-for-more-productive/ba-p/3041459">Learn more</a></td></tr>
-
-</table>
-
-## November 2021
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr>
- <td><b>Continuous integration and continuous delivery (CI/CD)</b></td>
- <td>GitHub integration improvements</td>
- <td>Improvements in Data Factory and GitHub integration remove limits on 1,000 Data Factory resources per resource type, such as datasets and pipelines. For large data factories, this change helps mitigate the impact of the GitHub API rate limit.<br><a href="source-control.md">Learn more</a></td>
- </tr>
-
-<tr><td rowspan=3><b>Data flow</b></td><td>Set a custom error code and error message with the Fail activity</td><td>Fail activity enables ETL developers to set the error message and custom error code for a Data Factory pipeline.<br><a href="control-flow-fail-activity.md">Learn more</a></td></tr>
-<tr><td>External call transformation</td><td>Mapping data flows External Call transformation enables ETL developers to use transformations and data enrichments provided by REST endpoints or third-party API services.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
-<tr><td>Synapse quick reuse</td><td>When you execute data flow in Synapse Analytics, use the TTL feature. The TTL feature uses the quick reuse feature so that sequential data flows will execute within a few seconds. You can set the TTL when you configure an Azure IR.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
-
-<tr><td rowspan=3><b>Data movement</b></td><td>Copy activity supports reading data from FTP or SFTP without chunking</td><td>Automatically determine the file length or the relevant offset to be read when you copy data from an FTP or SFTP server. With this capability, Data Factory automatically connects to the FTP or SFTP server to determine the file length. After the length is determined, Data Factory divides the file into multiple chunks and reads them in parallel.<br><a href="connector-ftp.md">Learn more</a></td></tr>
-<tr><td><i>UTF-8 without BOM</i> support in Copy activity</td><td>Copy activity supports writing data with the encoding type <i>UTF-8 without BOM</i> for JSON and delimited text datasets.</td></tr>
-<tr><td>Multicharacter column delimiter support</td><td>Copy activity supports using multicharacter column delimiters for delimited text datasets.</td></tr>
-
-<tr>
- <td><b>Integration runtime</b></td>
- <td>Run any process anywhere in three steps with SQL Server Integration Services (SSIS) in Data Factory</td>
- <td>Learn how to use the best of Data Factory and SSIS capabilities in a pipeline. A sample SSIS package with parameterized properties helps you get a jump-start. With Data Factory Studio, the SSIS package can be easily dragged and dropped into a pipeline and used as part of an Execute SSIS Package activity.<br><br>This capability enables you to run the Data Factory pipeline with an SSIS package on self-hosted IRs or SSIS IRs. By providing run-time parameter values, you can use the powerful capabilities of Data Factory and SSIS capabilities together. This article illustrates three steps to run any process, which can be any executable, such as an application, program, utility, or batch file, anywhere.
-<br><a href="https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-process-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2962609">Learn more</a></td>
- </tr>
-</table>
-
-## October 2021
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
-<tr><td rowspan=3><b>Data flow</b></td><td>Azure Data Explorer and Amazon Web Services (AWS) S3 connectors</td><td>The Microsoft Data Integration team has released two new connectors for mapping data flows. If you're using Azure Synapse, you can now connect directly to your AWS S3 buckets for data transformations. In both Data Factory and Azure Synapse, you can now natively connect to your Azure Data Explorer clusters in mapping data flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/mapping-data-flow-gets-new-native-connectors/ba-p/2866754">Learn more</a></td></tr>
-<tr><td>Power Query activity leaves preview for GA</td><td>The Data Factory Power Query pipeline activity is now generally available. This new feature provides scaled-out data prep and data wrangling for citizen integrators inside the Data Factory browser UI for an integrated experience for data engineers. The Power Query data wrangling feature in Data Factory provides a powerful, easy-to-use pipeline capability to solve your most complex data integration and ETL patterns in a single service.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/data-wrangling-at-scale-with-adf-s-power-query-activity-now/ba-p/2824207">Learn more</a></td></tr>
-<tr><td>New Stringify data transformation in mapping data flows</td><td>Mapping data flows adds a new data transformation called Stringify to make it easy to convert complex data types like structs and arrays into string form. These data types then can be sent to structured output destinations.<br><a href="data-flow-stringify.md">Learn more</a></td></tr>
-
-<tr>
- <td><b>Integration runtime</b></td>
- <td>Express virtual network injection for SSIS IR (public preview)</td>
- <td>The SSIS IR now supports express virtual network injection.<br>
- Learn more:<br>
- <a href="join-azure-ssis-integration-runtime-virtual-network.md">Overview of virtual network injection for SSIS IR</a><br>
- <a href="azure-ssis-integration-runtime-virtual-network-configuration.md">Standard vs. express virtual network injection for SSIS IR</a><br>
- <a href="azure-ssis-integration-runtime-express-virtual-network-injection.md">Express virtual network injection for SSIS IR</a>
- </td>
-</tr>
-
-<tr><td rowspan=2><b>Security</b></td><td>Azure Key Vault integration improvement</td><td>Key Vault integration now has dropdowns so that users can select the secret values in the linked service. This capability increases productivity because users aren't required to type in the secrets, which could result in human error.</td></tr>
-<tr><td>Support for user-assigned managed identity in Data Factory</td><td>Credential safety is crucial for any enterprise. The Data Factory team is committed to making the data engineering process secure yet simple for data engineers. User-assigned managed identity (preview) is now supported in all connectors and linked services that support Azure AD-based authentication.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/support-for-user-assigned-managed-identity-in-azure-data-factory/ba-p/2841013">Learn more</a></td></tr>
-</table>
-
-## September 2021
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
-
- <tr><td><b>Continuous integration and continuous delivery</b></td><td>Expanded CI/CD capabilities</td><td>You can now create a new Git branch based on any other branch in Data Factory.<br><a href="source-control.md#version-control">Learn more</a></td></tr>
-
-<tr><td rowspan=3><b>Data movement</b></td><td>Amazon Relational Database Service (RDS) for Oracle sources</td><td>The Amazon RDS for Oracle sources connector is now available in both Data Factory and Azure Synapse.<br><a href="connector-amazon-rds-for-oracle.md">Learn more</a></td></tr>
-<tr><td>Amazon RDS for SQL Server sources</td><td>The Amazon RDS for the SQL Server sources connector is now available in both Data Factory and Azure Synapse.<br><a href="connector-amazon-rds-for-sql-server.md">Learn more</a></td></tr>
-<tr><td>Support parallel copy from Azure Database for PostgreSQL</td><td>The Azure Database for PostgreSQL connector now supports parallel copy operations.<br><a href="connector-azure-database-for-postgresql.md">Learn more</a></td></tr>
-
-<tr><td rowspan=3><b>Data flow</b></td><td>Use Data Lake Storage Gen2 to execute pre- and post-processing commands</td><td>Hadoop Distributed File System pre- and post-processing commands can now be executed by using Data Lake Storage Gen2 sinks in data flows.<br><a href="connector-azure-data-lake-storage.md#pre-processing-and-post-processing-commands">Learn more</a></td></tr>
-<tr><td>Edit data flow properties for existing instances of the Azure IR </td><td>The Azure IR has been updated to allow editing of data flow properties for existing IRs. You can now modify data flow compute properties without needing to create a new Azure IR.<br><a href="concepts-integration-runtime.md">Learn more</a></td></tr>
-<tr><td>TTL setting for Azure Synapse to improve pipeline activities execution startup time</td><td>Azure Synapse has added TTL to the Azure IR to enable your data flow pipeline activities to begin execution in seconds, which greatly minimizes the runtime of your data flow pipelines.<br><a href="control-flow-execute-data-flow-activity.md#data-flow-integration-runtime">Learn more</a></td></tr>
-
-<tr><td><b>Integration runtime</b></td><td>Data Factory managed virtual network GA</td><td>You can now provision the Azure IR as part of a managed virtual network and use private endpoints to securely connect to supported data stores. Data traffic goes through Azure Private Links, which provides secured connectivity to the data source. It also prevents data exfiltration to the public internet.<br><a href="managed-virtual-network-private-endpoint.md">Learn more</a></td></tr>
-
-<tr><td rowspan=2><b>Orchestration</b></td><td>Operationalize and provide SLA for data pipelines</td><td>The new Elapsed Time Pipeline Run metric, combined with Data Factory alerts, empowers data pipeline developers to better deliver SLAs to their customers. Now you can tell us how long a pipeline should run, and we'll notify you proactively when the pipeline runs longer than expected.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/operationalize-and-provide-sla-for-data-pipelines/ba-p/2767768">Learn more</a></td></tr>
-<tr><td>Fail activity (public preview)</td><td>The new Fail activity allows you to throw an error in a pipeline intentionally for any reason. For example, you might use the Fail activity if a Lookup activity returns no matching data or a custom activity finishes with an internal error.<br><a href="control-flow-fail-activity.md">Learn more</a></td></tr>
-</table>
-
-## August 2021
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+## August 2022
- <tr><td><b>Continuous integration and continuous delivery</b></td><td>CI/CD improvements with GitHub support in Azure Government and Azure China</td><td>We've added support for GitHub in Azure for US Government and Azure China.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/cicd-improvements-with-github-support-in-azure-government-and/ba-p/2686918">Learn more</a></td></tr>
+### Data flow
+- Appfigures connector added as Source (Preview) [Learn more](connector-appfigures.md)
+- Cast transformation added - visually convert data types [Learn more](data-flow-cast.md)
+- New UI for inline datasets - categories added to easily find data sources [Learn more](data-flow-source.md#inline-datasets)
-<tr><td rowspan=2><b>Data movement</b></td><td>The Azure Cosmos DB API for MongoDB connector supports versions 3.6 and 4.0 in Data Factory</td><td>The Data Factory Azure Cosmos DB API for MongoDB connector now supports server versions 3.6 and 4.0.<br><a href="connector-azure-cosmos-db-mongodb-api.md">Learn more</a></td></tr>
-<tr><td>Enhance using COPY statement to load data into Azure Synapse</td><td>The Data Factory Azure Synapse connector now supports staged copy and copy source with *.* as wildcardFilename for the COPY statement.<br><a href="connector-azure-sql-data-warehouse.md#use-copy-statement">Learn more</a></td></tr>
+### Data movement
+Service principal authentication type added for Azure Blob storage [Learn more](connector-azure-blob-storage.md?tabs=data-factory#service-principal-authentication)
-<tr><td><b>Data flow</b></td><td>REST endpoints are available as source and sink in data flow</td><td>Data flows in Data Factory and Azure Synapse now support REST endpoints as both a source and sink with full support for both JSON and XML payloads.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/rest-source-and-sink-now-available-for-data-flows/ba-p/2596484">Learn more</a></td></tr>
+### Developer productivity
+- Default activity timeout changed from 7 days to 12 hours [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-changing-default-pipeline-activity-timeout/ba-p/3598729)
+- New data factory creation experience - one click to have your factory ready within seconds [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/new-experience-for-creating-data-factory-within-seconds/ba-p/3561249)
+- Expression builder UI update - categorical tabs added for easier use [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/coming-soon-to-adf-more-pipeline-expression-builder-ease-of-use/ba-p/3567196)
-<tr><td><b>Integration runtime</b></td><td>Diagnostic tool is available for self-hosted IR</td><td>A diagnostic tool for self-hosted IR is designed to provide a better user experience and help users to find potential issues. The tool runs a series of test scenarios on the self-hosted IR machine. Every scenario has typical health check cases for common issues.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/diagnostic-tool-for-self-hosted-integration-runtime/ba-p/2634905">Learn more</a></td></tr>
+### Continuous integration and continuous delivery (CI/CD)
+When integrating CI/CD with ARM templates, you can now exclude triggers that didn't change in the deployment instead of turning off all triggers [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvements-related-to-pipeline-triggers-deployment/ba-p/3605064)
-<tr><td><b>Orchestration</b></td><td>Custom event trigger with advanced filtering option GA</td><td>You can now create a trigger that responds to a custom topic posted to Azure Event Grid. You can also use advanced filtering to get fine-grain control over what events to respond to.<br><a href="how-to-create-custom-event-trigger.md">Learn more</a></td></tr>
-</table>
+### Video summary
-## July 2021
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+> [!VIDEO https://www.youtube.com/embed?v=KCJ2F6Y_nfo&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=5]
-<tr><td><b>Data movement</b></td><td>Get metadata-driven data ingestion pipelines on the Data Factory Copy Data tool within 10 minutes (public preview)</td><td>Now you can build large-scale data copy pipelines with a metadata-driven approach on the Copy Data tool (public preview) within 10 minutes.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/get-metadata-driven-data-ingestion-pipelines-on-adf-within-10/ba-p/2528219">Learn more</a></td></tr>
+## July 2022
-<tr><td><b>Data flow</b></td><td>New map functions added in data flow transformation functions</td><td>A new set of data flow transformation functions enables data engineers to easily generate, read, and update map data types and complex map structures.<br><a href="data-flow-map-functions.md">Learn more</a></td></tr>
+### Data flow
-<tr><td><b>Integration runtime</b></td><td>Five new regions are available in Data Factory managed virtual network (public preview)</td><td>Five new regions, China East2, China North2, US Government Arizona, US Government Texas, and US Government Virginia, are available in the Data Factory managed virtual network (public preview).<br></td></tr>
+- Asana connector added as source. [Learn more](connector-asana.md)
+- Three new data transformation functions now supported. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/3-new-data-transformation-functions-in-adf/ba-p/3582738)
+ - [collectUnique()](data-flow-expressions-usage.md#collectUnique) - Create a new collection of unique values in an array.
+ - [substringIndex()](data-flow-expressions-usage.md#substringIndex) - Extract the substring before n occurrences of a delimiter.
+ - [topN()](data-flow-expressions-usage.md#topN) - Return the top n results after sorting your data.
+- Refetch from source available in Refresh for data source change scenarios. [Learn more](concepts-data-flow-debug-mode.md#data-preview)
+- User defined functions (GA) - Create reusable and customized expressions to avoid building complex logic over and over. [Learn more](concepts-data-flow-udf.md) [Video](https://www.youtube.com/watch?v=ZFTVoe8eeOc&t=170s)
+- Easier configuration on data flow runtime - choose compute size among Small, Medium and Large to pre-configure all integration runtime settings. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-makes-it-easy-to-select-azure-ir-size-for-data-flows/ba-p/3578033)
-<tr><td rowspan=2><b>Developer productivity</b></td><td>Data Factory home page improvements</td><td>The Data Factory home page has been redesigned with better contrast and reflow capabilities. A few sections are introduced on the home page to help you improve productivity in your data integration journey.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
-<tr><td>New landing page for Data Factory Studio</td><td>The landing page for the Data Factory pane in the Azure portal.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
-</table>
+### Continuous integration and continuous delivery (CI/CD)
-## June 2021
-<br>
-<table>
-<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+Include Global parameters supported in ARM template. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/ci-cd-improvement-using-global-parameters-in-azure-data-factory/ba-p/3557265#M665)
+### Developer productivity
-<tr><td rowspan=4 valign="middle"><b>Data movement</b></td><td>New user experience with Data Factory Copy Data tool</td><td>The redesigned Copy Data tool is now available with improved data ingestion experience.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/a-re-designed-copy-data-tool-experience/ba-p/2380634">Learn more</a></td></tr>
-<tr><td>MongoDB and MongoDB Atlas are supported as both source and sink</td><td>This improvement supports copying data between any supported data store and MongoDB or MongoDB Atlas database.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/new-connectors-available-in-adf-mongodb-and-mongodb-atlas-are/ba-p/2441482">Learn more</a></td></tr>
-<tr><td>Always Encrypted is supported for SQL Database, SQL Managed Instance, and SQL Server connectors as both source and sink</td><td>Always Encrypted is available in Data Factory for SQL Database, SQL Managed Instance, and SQL Server connectors for the Copy activity.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/azure-data-factory-copy-now-supports-always-encrypted-for-both/ba-p/2461346">Learn more</a></td></tr>
-<tr><td>Setting custom metadata is supported in Copy activity when sinking to Data Lake Storage Gen2 or Blob Storage</td><td>When you write to Data Lake Storage Gen2 or Blob Storage, the Copy activity supports setting custom metadata or storage of the source file's last modified information as metadata.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/support-setting-custom-metadata-when-writing-to-blob-adls-gen2/ba-p/2545506#M490">Learn more</a></td></tr>
+Be a part of Azure Data Factory studio preview features - Experience the latest Azure Data Factory capabilities and be the first to share your feedback. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-the-azure-data-factory-studio-preview-experience/ba-p/3563880)
-<tr><td rowspan=4 valign="middle"><b>Data flow</b></td><td>SQL Server is now supported as a source and sink in data flows</td><td>SQL Server is now supported as a source and sink in data flows. Follow the link for instructions on how to configure your networking by using the Azure IR managed virtual network feature to talk to your SQL Server on-premises and cloud VM-based instances.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/new-data-flow-connector-sql-server-as-source-and-sink/ba-p/2406213">Learn more</a></td></tr>
-<tr><td>Dataflow Cluster quick reuse now enabled by default for all new Azure IRs</td><td>The popular data flow quick startup reuse feature is now generally available for Data Factory. All new Azure IRs now have quick reuse enabled by default.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/how-to-startup-your-data-flows-execution-in-less-than-5-seconds/ba-p/2267365">Learn more</a></td></tr>
-<tr><td>Power Query (public preview) activity</td><td>You can now build complex field mappings to your Power Query sink by using Data Factory data wrangling. The sink is now configured in the pipeline in the Power Query (public preview) activity to accommodate this update.<br><a href="wrangling-tutorial.md">Learn more</a></td></tr>
-<tr><td>Updated data flows monitoring UI in Data Factory</td><td>Data Factory has a new update for the monitoring UI to make it easier to view your data flow ETL job executions and quickly identify areas for performance tuning.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/updated-data-flows-monitoring-ui-in-adf-amp-synapse/ba-p/2432199">Learn more</a></td></tr>
+### Video summary
-<tr><td><b>SQL Server Integration Services</b></td><td>Run any SQL statements or scripts anywhere in three steps with SSIS in Data Factory</td><td>This post provides three steps to run any SQL statements or scripts anywhere with SSIS in Data Factory.<ol><li>Prepare your self-hosted IR or SSIS IR.</li><li>Prepare an Execute SSIS Package activity in Data Factory pipeline.</li><li>Run the Execute SSIS Package activity on your self-hosted IR or SSIS IR.</li></ol><a href="https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-sql-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2457244">Learn more</a></td></tr>
-</table>
+> [!VIDEO https://www.youtube.com/embed?v=EOVVt4qYvZI&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=4]
## More information
+- [What's new archive](whats-new-archive.md)
+- [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv)
- [Blog - Azure Data Factory](https://techcommunity.microsoft.com/t5/azure-data-factory/bg-p/AzureDataFactoryBlog)
- [Stack Overflow forum](https://stackoverflow.com/questions/tagged/azure-data-factory)
- [Twitter](https://twitter.com/AzDataFactory?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor)
digital-twins Concepts Event Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-event-notifications.md
description: Learn to interpret various event types and their different notification messages. Previously updated : 03/01/2022 Last updated : 09/20/2022
The data in the corresponding notification (if synchronously executed by the ser
} ```
-This data is the information that will go in the `data` field of the lifecycle notification message.
+>[!NOTE]
+> Azure Digital Twins currently doesn't support [filtering events](how-to-manage-routes.md#filter-events) based on fields within an array. This includes filtering on properties within a `patch` section of a digital twin change notification.
## Digital twin lifecycle notifications
Here's an example of the data for a create or delete relationship notification.
## Digital twin telemetry messages
-*Telemetry messages* are received in Azure Digital Twins from connected devices that collect and send measurements.
+Digital twins can use the [SendTelemetry API](/rest/api/digital-twins/dataplane/twins/digitaltwins_sendtelemetry) to emit *telemetry messages* and send them to egress endpoints.
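If you want to trigger one of these telemetry messages manually for testing, the Azure CLI can publish telemetry on behalf of a twin. The sketch below is a minimal example, assuming the azure-iot CLI extension is installed; `<instance-name>` and `<twin-id>` are placeholders for your own instance and twin, and the JSON payload is arbitrary sample data.

```azurecli-interactive
# Hypothetical example: publish a telemetry payload from an existing twin.
# <instance-name> and <twin-id> are placeholders; the payload is sample data.
az dt twin telemetry send --dt-name <instance-name> --twin-id <twin-id> --telemetry '{"Temperature": 22.5}'
```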
### Properties
Here are the fields in the body of a telemetry message.
| Name | Value |
| --- | --- |
| `id` | Identifier of the notification, which is provided by the customer when calling the telemetry API. |
-| `source` | Fully qualified name of the twin that the telemetry event was sent to. Uses the following format: `<your-Digital-Twin-instance>.api.<your-region>.digitaltwins.azure.net/<twin-ID>`. |
+| `source` | Fully qualified name of the twin that the telemetry event was sent from. Uses the following format: `<your-Digital-Twin-instance>.api.<your-region>.digitaltwins.azure.net/<twin-ID>`. |
| `specversion` | *1.0*<br>The message conforms to this version of the [CloudEvents spec](https://github.com/cloudevents/spec). |
| `type` | `microsoft.iot.telemetry` |
-| `data` | The telemetry message that has been sent to twins. The payload is unmodified and may not align with the schema of the twin that has been sent the telemetry. |
+| `data` | The telemetry message being sent from the twin. The payload does not need to align with any schema defined in your Azure Digital Twins instance. |
| `dataschema` | The data schema is the model ID of the twin or the component that emits the telemetry. For example, `dtmi:example:com:floor4;2`. |
| `datacontenttype` | `application/json` |
| `traceparent` | A W3C trace context for the event. |

### Body details
-The body contains the telemetry measurement along with some contextual information about the device.
+The body contains the telemetry measurement along with some contextual information about the twin.
Here's an example telemetry message body:
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-maps.md
description: Learn how to use Azure Functions to create a function that can use the twin graph and Azure Digital Twins notifications to update an Azure Maps indoor map. Previously updated : 02/22/2022 Last updated : 09/27/2022 + # Optional fields. Don't forget to remove # if you need a field. #
#
-# Use Azure Digital Twins to update an Azure Maps indoor map
+# Integrate Azure Digital Twins data into an Azure Maps indoor map
-This article walks through the steps required to use Azure Digital Twins data to update information displayed on an *indoor map* using [Azure Maps](../azure-maps/about-azure-maps.md). Azure Digital Twins stores a graph of your IoT device relationships and routes telemetry to different endpoints, making it the perfect service for updating informational overlays on maps.
+This article shows how to use Azure Digital Twins data to update information displayed on an *indoor map* from [Azure Maps](../azure-maps/about-azure-maps.md). Because Azure Digital Twins stores a graph of your IoT device relationships and routes telemetry to different endpoints, it's a great service for updating informational overlays on maps.
-This guide will cover:
+This guide covers the following information:
1. Configuring your Azure Digital Twins instance to send twin update events to a function in [Azure Functions](../azure-functions/functions-overview.md). 2. Creating a function to update an Azure Maps indoor maps feature stateset.
-3. How to store your maps ID and feature stateset ID in the Azure Digital Twins graph.
+3. Storing your maps ID and feature stateset ID in the Azure Digital Twins graph.
+
+## Get started
+
+This section sets additional context for the information in this article.
### Prerequisites
-* Follow the Azure Digital Twins in [Connect an end-to-end solution](./tutorial-end-to-end.md).
- * You'll be extending this twin with another endpoint and route. You'll also be adding another function to your function app from that tutorial.
-* Follow the Azure Maps in [Use Azure Maps Creator to create indoor maps](../azure-maps/tutorial-creator-indoor-maps.md) to create an Azure Maps indoor map with a *feature stateset*.
- * [Feature statesets](../azure-maps/creator-indoor-maps.md#feature-statesets) are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps tutorial above, the feature stateset stores room status that you'll be displaying on a map.
- * You'll need your feature **stateset ID** and Azure Maps **subscription key**.
+Before proceeding with this article, start by setting up your individual Azure Digital Twins and Azure Maps resources.
+
+* For Azure Digital Twins: Follow the instructions in [Connect an end-to-end solution](./tutorial-end-to-end.md) to set up an Azure Digital Twins instance with a sample twin graph and simulated data flow.
+ * In this article, you'll extend that solution with another endpoint and route. You'll also add another function to the function app from that tutorial.
+* For Azure Maps: Follow the instructions in [Use Creator to create indoor maps](../azure-maps/tutorial-creator-indoor-maps.md) and [Create a feature stateset](../azure-maps/tutorial-creator-feature-stateset.md) to create an Azure Maps indoor map with a *feature stateset*.
+ * [Feature statesets](../azure-maps/creator-indoor-maps.md#feature-statesets) are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps instructions above, the feature stateset stores room status that you'll be displaying on a map.
+ * You'll need your Azure Maps **subscription key**, feature **stateset ID**, and **mapConfiguration**.
### Topology
The image below illustrates where the indoor maps integration elements in this t
:::image type="content" source="media/how-to-integrate-maps/maps-integration-topology.png" alt-text="Diagram of Azure services in an end-to-end scenario, highlighting the Indoor Maps Integration piece." lightbox="media/how-to-integrate-maps/maps-integration-topology.png":::
-## Create a function to update a map when twins update
+## Route twin update notifications from Azure Digital Twins
-First, you'll create a route in Azure Digital Twins to forward all twin update events to an Event Grid topic. Then, you'll use a function to read those update messages and update a feature stateset in Azure Maps.
-
-## Create a route and filter to twin update notifications
-
-Azure Digital Twins instances can emit twin update events whenever a twin's state is updated. The Azure Digital Twins [Connect an end-to-end solution](./tutorial-end-to-end.md) linked above walks through a scenario where a thermometer is used to update a temperature attribute attached to a room's twin. You'll be extending that solution by subscribing to update notifications for twins, and using that information to update your maps.
+Azure Digital Twins instances can emit twin update events whenever a twin's state is updated. The Azure Digital Twins [Connect an end-to-end solution](./tutorial-end-to-end.md) linked above walks through a scenario where a thermometer is used to update a temperature attribute attached to a room's twin. This tutorial extends that solution by subscribing an Azure function to update notifications from twins, and using that function to update your maps.
This pattern reads from the room twin directly, rather than the IoT device, which gives you the flexibility to change the underlying data source for temperature without needing to update your mapping logic. For example, you can add multiple thermometers or set this room to share a thermometer with another room, all without needing to update your map logic.
+First, you'll create a route in Azure Digital Twins to forward all twin update events to an Event Grid topic.
+ 1. Create an Event Grid topic, which will receive events from your Azure Digital Twins instance, using the CLI command below: ```azurecli-interactive az eventgrid topic create --resource-group <your-resource-group-name> --name <your-topic-name> --location <region>
This pattern reads from the room twin directly, rather than the IoT device, whic
az dt route create --dt-name <your-Azure-Digital-Twins-instance-hostname-or-name> --endpoint-name <Event-Grid-endpoint-name> --route-name <my-route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'" ```
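The route command above assumes that an Event Grid endpoint named `<Event-Grid-endpoint-name>` already exists on your Azure Digital Twins instance. If you haven't created one yet, the following is a possible sketch of doing so with the CLI; the angle-bracket values are placeholders matching the ones used above, and the topic is the one created in the earlier step.

```azurecli-interactive
# Hedged example: attach the Event Grid topic created above to the
# Azure Digital Twins instance as an endpoint. All angle-bracket values are placeholders.
az dt endpoint create eventgrid --dt-name <your-Azure-Digital-Twins-instance-hostname-or-name> --endpoint-name <Event-Grid-endpoint-name> --eventgrid-resource-group <your-resource-group-name> --eventgrid-topic <your-topic-name>
```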
-## Create a function to update maps
+## Create an Azure function to receive events and update maps
-You're going to create an Event Grid-triggered function inside your function app from the end-to-end tutorial ([Connect an end-to-end solution](./tutorial-end-to-end.md)). This function will unpack those notifications and send updates to an Azure Maps feature stateset to update the temperature of one room.
+In this section, you'll create a function that listens for events sent to the Event Grid topic. The function will read those update notifications and send corresponding updates to an Azure Maps feature stateset, to update the temperature of one room.
-See the following document for reference info: [Azure Event Grid trigger for Azure Functions](../azure-functions/functions-bindings-event-grid-trigger.md).
+In the Azure Digital Twins tutorial [prerequisite](#prerequisites), you created a function app to store Azure functions for Azure Digital Twins. Now, create a new [Event Grid-triggered Azure function](../azure-functions/functions-bindings-event-grid-trigger.md) inside the function app.
Replace the function code with the following code. It will filter out only updates to space twins, read the updated temperature, and send that information to Azure Maps.
az functionapp config appsettings set --name <your-function-app-name> --resource
az functionapp config appsettings set --name <your-function-app-name> --resource-group <your-resource-group> --settings "statesetID=<your-Azure-Maps-stateset-ID>" ```
-### View live updates on your map
+### View live updates in the map
To see live-updating temperature, follow the steps below: 1. Begin sending simulated IoT data by running the *DeviceSimulator* project from the Azure Digital Twins [Connect an end-to-end solution](tutorial-end-to-end.md). The instructions for this process are in the [Configure and run the simulation](././tutorial-end-to-end.md#configure-and-run-the-simulation) section. 2. Use [the Azure Maps Indoor module](../azure-maps/how-to-use-indoor-module.md) to render your indoor maps created in Azure Maps Creator.
- 1. Copy the HTML from the [Example: Use the Indoor Maps Module](../azure-maps/how-to-use-indoor-module.md#example-use-the-indoor-maps-module) section of the indoor maps in [Use the Azure Maps Indoor Maps module](../azure-maps/how-to-use-indoor-module.md) to a local file.
- 1. Replace the **subscription key**, **tilesetId**, and **statesetID** in the local HTML file with your values.
+ 1. Copy the example indoor map HTML file from [Example: Custom Styling: Consume map configuration in WebSDK (Preview)](../azure-maps/how-to-use-indoor-module.md#example-custom-styling-consume-map-configuration-in-websdk-preview).
+ 1. Replace the **subscription key**, **mapConfiguration**, **statesetID**, and **region** in the local HTML file with your values.
1. Open that file in your browser. Both samples send temperature in a compatible range, so you should see the color of room 121 update on the map about every 30 seconds. :::image type="content" source="media/how-to-integrate-maps/maps-temperature-update.png" alt-text="Screenshot of an office map showing room 121 colored orange.":::
-## Store your maps information in Azure Digital Twins
+## Store map information in Azure Digital Twins
Now that you have a hardcoded solution to updating your maps information, you can use the Azure Digital Twins graph to store all of the information necessary for updating your indoor map. This information would include the stateset ID, maps subscription ID, and feature ID of each map and location respectively.
dns Dns Alerts Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alerts-metrics.md
Title: Metrics and alerts - Azure DNS
description: With this learning path, get started with Azure DNS metrics and alerts. documentationcenter: na-+ na Previously updated : 04/26/2021- Last updated : 09/27/2022+ # Azure DNS metrics and alerts
Azure DNS provides the following metrics to Azure Monitor for your DNS zones:
For more information, see [metrics definition](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkdnszones).

>[!NOTE]
-> At this time, these metrics are only available for Public DNS zones hosted in Azure DNS. If you have Private Zones hosted in Azure DNS, these metrics will not provide data for those zones. In addition, the metrics and alerting feature is only supported in Azure Public cloud. Support for sovereign clouds will follow at a later time.
+> At this time, these metrics are only available for Public DNS zones hosted in Azure DNS. If you have Private Zones hosted in Azure DNS, these metrics won't provide data for those zones. In addition, the metrics and alerting feature is only supported in Azure Public cloud. Support for sovereign clouds will follow at a later time.
The most granular element that you can see metrics for is a DNS zone. You currently can't see metrics for individual resource records within a zone.
To view this metric, select **Metrics** explorer experience from the **Monitor**
## Alerts in Azure DNS
-Azure Monitor has alerting that you can configure for each available metric values. See [Azure Monitor alerts](../azure-monitor/alerts/alerts-metric.md) for more information.
+Azure Monitor has alerting that you can configure for each available metric value. See [Azure Monitor alerts](../azure-monitor/alerts/alerts-metric.md) for more information.
1. To configure alerting for Azure DNS zones, select **Alerts** from the *Monitor* page in the Azure portal. Then select **+ New alert rule**. :::image type="content" source="./media/dns-alerts-metrics/alert-metrics.png" alt-text="Screenshot of Alert button on Monitor page.":::
-1. Click the **Select resource** link in the Scope section to open the *Select a resource* page. Filter by **DNS zones** and then select the Azure DNS zone you want as the target resource. Select **Done** once you have choose the zone.
+1. Click the **Select resource** link in the Scope section to open the *Select a resource* page. Filter by **DNS zones** and then select the Azure DNS zone you want as the target resource. Select **Done** once you've chosen the zone.
:::image type="content" source="./media/dns-alerts-metrics/select-resource.png" alt-text="Screenshot of select resource page in configuring alerts.":::
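If you prefer to script the same alert, here's a hedged Azure CLI sketch. The resource group, zone name, metric, and threshold are illustrative only; `QueryVolume` is assumed to be the DNS zone metric you want to alert on.

```azurecli
# A hedged sketch: create a metric alert on a public DNS zone from the CLI instead of the portal.
zoneId=$(az network dns zone show --resource-group myResourceGroup --name contoso.com --query id -o tsv)

az monitor metrics alert create \
  --name dns-query-volume-alert \
  --resource-group myResourceGroup \
  --scopes "$zoneId" \
  --condition "total QueryVolume > 1000" \
  --description "Alert when the zone receives more than 1000 queries in the evaluation window"
```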
dns Dns Getstarted Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-portal.md
You can configure Azure DNS to resolve host names in your public domain. For example, if you purchased the *contoso.xyz* domain name from a domain name registrar, you can configure Azure DNS to host the *contoso.xyz* domain and resolve *`www.contoso.xyz`* to the IP address of your web server or web app.
-In this quickstart, you will create a test domain, and then create an address record to resolve *www* to the IP address *10.10.10.10*.
+In this quickstart, you'll create a test domain, and then create an address record to resolve *www* to the IP address *10.10.10.10*.
:::image type="content" source="media/dns-getstarted-portal/environment-diagram.png" alt-text="Diagram of DNS deployment environment using the Azure portal." border="false":::
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
Next, add a virtual network to the resource group that you created, and configur
![create resolver - review](./media/dns-resolver-getstarted-portal/resolver-review.png)
- After selecting **Create**, the new DNS resolver will begin deployment. This process might take a minute or two, and you'll see the status of each component as it is deployed.
+ After selecting **Create**, the new DNS resolver will begin deployment. This process might take a minute or two, and you'll see the status of each component as it's deployed.
![create resolver - status](./media/dns-resolver-getstarted-portal/resolver-status.png)
dns Dns Reverse Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-overview.md
For example, the DNS record `www.contoso.com` is implemented using a DNS 'A' rec
When an organization is assigned an IP address block, they also acquire the right to manage the corresponding ARPA zone. The ARPA zones corresponding to the IP address blocks used by Azure are hosted and managed by Microsoft. Your ISP may host the ARPA zone for you for the IP addresses you own. They may also allow you to host the ARPA zone in a DNS service of your choice, such as Azure DNS. > [!NOTE]
-> Forward DNS lookups and reverse DNS lookups are implemented in separate, parallel DNS hierarchies. The reverse lookup for 'www.contoso.com' is **not** hosted in the zone 'contoso.com', rather it is hosted in the ARPA zone for the corresponding IP address block. Separate zones are used for IPv4 and IPv6 address blocks.
+> Forward DNS lookups and reverse DNS lookups are implemented in separate, parallel DNS hierarchies. The reverse lookup for 'www.contoso.com' is **not** hosted in the zone 'contoso.com', rather it's hosted in the ARPA zone for the corresponding IP address block. Separate zones are used for IPv4 and IPv6 address blocks.
### IPv4
dns Private Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-import-export.md
Importing a zone file creates a new zone in Azure private DNS if one does not al
The following notes provide additional technical details about the zone import process.
-* The `$TTL` directive is optional, and it is supported. When no `$TTL` directive is given, records without an explicit TTL are imported set to a default TTL of 3600 seconds. When two records in the same record set specify different TTLs, the lower value is used.
-* The `$ORIGIN` directive is optional, and it is supported. When no `$ORIGIN` is set, the default value used is the zone name as specified on the command line (plus the terminating ".").
+* The `$TTL` directive is optional, and it's supported. When no `$TTL` directive is given, records without an explicit TTL are imported with a default TTL of 3600 seconds. When two records in the same record set specify different TTLs, the lower value is used.
+* The `$ORIGIN` directive is optional, and it's supported. When no `$ORIGIN` is set, the default value used is the zone name as specified on the command line (plus the terminating ".").
* The `$INCLUDE` and `$GENERATE` directives are not supported. * These record types are supported: A, AAAA, CAA, CNAME, MX, NS, SOA, SRV, and TXT. * The SOA record is created automatically by Azure DNS when a zone is created. When you import a zone file, all SOA parameters are taken from the zone file *except* the `host` parameter. This parameter uses the value provided by Azure DNS. This is because this parameter must refer to the primary name server provided by Azure DNS. * The name server record set at the zone apex is also created automatically by Azure DNS when the zone is created. Only the TTL of this record set is imported. These records contain the name server names provided by Azure DNS. The record data is not overwritten by the values contained in the imported zone file.
-* During Public Preview, Azure DNS supports only single-string TXT records. Multistring TXT records are be concatenated and truncated to 255 characters.
+* During Public Preview, Azure DNS supports only single-string TXT records. Multistring TXT records will be concatenated and truncated to 255 characters.
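To make the directive behavior concrete, here's a minimal sketch (the zone name and records are illustrative) that writes a small zone file using the optional `$ORIGIN` and `$TTL` directives and then imports it with the command shown later in this article.

```azurecli
# Records without an explicit TTL inherit the $TTL value (3600 seconds here).
cat > contoso.com.txt <<'EOF'
$ORIGIN contoso.com.
$TTL 3600
www    IN A     10.10.10.10
web    IN CNAME www.contoso.com.
note   IN TXT   "single-string TXT record"
EOF

az network private-dns zone import -g myresourcegroup -n contoso.com -f contoso.com.txt
```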
### CLI format and values
Values:
* `<zone name>` is the name of the zone. * `<zone file name>` is the path/name of the zone file to be imported.
-If a zone with this name does not exist in the resource group, it is created for you. If the zone already exists, the imported record sets are merged with existing record sets.
+If a zone with this name does not exist in the resource group, it's created for you. If the zone already exists, the imported record sets are merged with existing record sets.
### Import a zone file
To import a zone file for the zone **contoso.com**.
az group create --resource-group myresourcegroup -l westeurope ```
-2. To import the zone **contoso.com** from the file **contoso.com.txt** into a new DNS zone in the resource group **myresourcegroup**, you will run the command `az network private-dns zone import`.<BR>This command loads the zone file and parses it. The command executes a series of commands on the Azure DNS service to create the zone and all the record sets in the zone. The command reports progress in the console window, along with any errors or warnings. Because record sets are created in series, it may take a few minutes to import a large zone file.
+2. To import the zone **contoso.com** from the file **contoso.com.txt** into a new DNS zone in the resource group **myresourcegroup**, you'll run the command `az network private-dns zone import`.<BR>This command loads the zone file and parses it. The command executes a series of commands on the Azure DNS service to create the zone and all the record sets in the zone. The command reports progress in the console window, along with any errors or warnings. Because record sets are created in series, it may take a few minutes to import a large zone file.
```azurecli az network private-dns zone import -g myresourcegroup -n contoso.com -f contoso.com.txt
dns Private Resolver Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-reliability.md
+
+ Title: Resiliency in Azure DNS Private Resolver #Required; Must be "Resiliency in *your official service name*"
+description: Find out about reliability in Azure DNS Private Resolver #Required;
+++++ Last updated : 09/27/2022 #Required; mm/dd/yyyy format.
+#Customer intent: As a customer, I want to understand reliability support for Azure DNS Private Resolver. I need to avoid failures and respond to them so that I can minimize down time and data loss.
++
+# Resiliency in Azure DNS Private Resolver
+
+This article describes reliability support in Azure DNS Private Resolver, and covers both regional resiliency with [availability zones](#availability-zones) and cross-region resiliency with disaster recovery.
+
+> [!NOTE]
+> Azure DNS Private Resolver supports availability zones without any further configuration! When the service is provisioned, it's deployed across the different availability zones, and will provide zone resiliency out of the box.
+
+For a comprehensive overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+
+## Azure DNS Private Resolver
+
+[Azure DNS Private Resolver](dns-private-resolver-overview.md) enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM-based DNS servers. You no longer need to provision IaaS-based solutions on your virtual networks to resolve names registered on Azure private DNS zones. You can configure conditional forwarding of domains back to on-premises, multicloud, and public DNS servers.
+
+## Availability zones
+
+For more information about availability zones, see [Regions and availability zones](/azure/availability-zones/az-overview).
+
+### Prerequisites
+
+For a list of regions that support availability zones, see [Azure regions with availability zones](/azure/availability-zones/az-region#azure-regions-with-availability-zones). If your Azure DNS Private Resolver is located in one of the regions listed, you don't need to take any other action beyond provisioning the service.
+
+#### Enabling availability zones with private resolver
+
+To enable AZ support for Azure DNS Private Resolver, you do not need to take further steps beyond provisioning the service. Just create the private resolver in a region with AZ support, and it will be available across all AZs.
+
+For detailed steps on how to provision the service, see [Create an Azure private DNS Resolver using the Azure portal](dns-private-resolver-get-started-portal.md).
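For a scripted alternative, the following is a hedged sketch only: it assumes the `dns-resolver` CLI extension, and the parameter names (notably `--id` for the linked virtual network) reflect that extension's preview syntax, so verify them against your CLI version before use. Resource names and the subscription ID placeholder are illustrative.

```azurecli
# A hedged sketch: create a private resolver in a region with availability zone support.
az extension add --name dns-resolver

az dns-resolver create \
  --name myPrivateResolver \
  --resource-group myResourceGroup \
  --location eastus2 \
  --id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet"
```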
+
+### Fault tolerance
+
+During a zone-wide outage, no action is required during zone recovery. The service will self-heal and rebalance to take advantage of the healthy zone automatically. The service is provisioned across all the AZs.
+
+## Disaster recovery and cross-region failover
+
+For cross-region failover in Azure DNS Private Resolver, see [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md).
+
+In the event of a regional outage, use the same design as that described in [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md). When you configure this failover design, you can keep resolving names using the other active regions, and also increase the resiliency of your workloads.
+
+All instances of Azure DNS Private Resolver run as Active-Active within the same region.
+
+The service health is onboarded to [Azure Resource Health](/azure/service-health/resource-health-overview), so you'll be able to check for health notifications when you subscribe to them. For more information, see [Create activity log alerts on service notifications using the Azure portal](/azure/service-health/alerts-activity-log-service-notifications-portal).
+
+Also see the [SLA for Azure DNS](https://azure.microsoft.com/support/legal/sla/dns/v1_1/).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](/azure/availability-zones/overview)
dns Tutorial Dns Private Resolver Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-dns-private-resolver-failover.md
Previously updated : 08/18/2022 Last updated : 09/27/2022 #Customer intent: As an administrator, I want to avoid having a single point of failure for DNS resolution. # Tutorial: Set up DNS failover using private resolvers
-This article details how to eliminate a single point of failure in your on-premises DNS services by using two or more Azure DNS private resolvers deployed across different regions. DNS failover is enabled by assigning a local resolver as your primary DNS and the resolver in an adjacent region as secondary DNS.
+This article details how to eliminate a single point of failure in your on-premises DNS services by using two or more Azure DNS private resolvers deployed across different regions. DNS failover is enabled by assigning a local resolver as your primary DNS and the resolver in an adjacent region as secondary DNS. If the primary DNS server fails to respond, DNS clients automatically retry using the secondary DNS server.
> [!IMPORTANT] > Azure DNS Private Resolver is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
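As a hedged illustration of the primary/secondary pattern (not a step from this tutorial): if your DNS clients run in an Azure virtual network, you can list the inbound endpoint IP addresses of two resolvers in different regions as the VNet's DNS servers. The IP addresses below are hypothetical inbound endpoint addresses.

```azurecli
# The first address is typically tried first (primary); clients retry against the second
# (secondary) if the primary doesn't respond. Replace the values with your inbound endpoint IPs.
az network vnet update \
  --resource-group myResourceGroup \
  --name myWorkloadVnet \
  --dns-servers 10.10.0.4 10.20.0.4
```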
energy-data-services Concepts Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-ddms.md
**Domain Data Management Service (DDMS)** – is a platform component that extends the [OSDU&trade;](https://osduforum.org) core data platform with a domain-specific model and optimizations. DDMS is a platform extension mechanism that: * delivers optimized handling of data for each (non-overlapping) "domain."
-* single vertical discipline or business area, for example, Petrophysics, Geophysics, Seismic
-* a functional aspect of one or more vertical disciplines or business areas, for example, Earth Model
+* pertains to a single vertical discipline or business area, for example, Petrophysics, Geophysics, Seismic
+* serves a functional aspect of one or more vertical disciplines or business areas, for example, Earth Model
* delivers high-performance capabilities not supported by the generic OSDU&trade; APIs.
-* can help achieve the extension of OSDU&trade; scope to new business areas.
+* helps achieve the extension of OSDU&trade; scope to new business areas.
* may be developed in a distributed manner with separate resources/sponsors. The OSDU&trade; Technical Standard defines the following OSDU&trade; application types:
event-grid Availability Zones Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/availability-zones-disaster-recovery.md
Title: Availability zones and disaster recovery | Microsoft Docs
+ Title: Event Grid support for availability zones and disaster recovery
description: Describes how Azure Event Grid supports availability zones and disaster recovery. Previously updated : 07/18/2022 Last updated : 09/23/2022
-# Availability zones and disaster recovery
-Azure availability zones are designed to help you achieve resiliency and reliability for your business-critical workloads. Event Grid supports automatic geo-disaster recovery of event subscription configuration data (metadata) for topics, system topics, domains, and partner topics. This article gives you more details about Event Grid's support for availability zones and disaster recovery.
+# In-region recovery using availability zones and geo-disaster recovery across regions (Azure Event Grid)
-## Availability zones
+This article describes how Azure Event Grid supports automatic in-region recovery of your Event Grid resource definitions and data when a failure occurs in a region that has availability zones. It also describes how Event Grid supports automatic recovery of Event Grid resource definitions (no data) to another region when a failure occurs in a region that has a paired region.
-Azure availability zones are designed to help you achieve resiliency and reliability for your business-critical workloads. Azure maintains multiple geographies. These discrete demarcations define disaster recovery and data residency boundaries across one or multiple Azure regions. Maintaining many regions ensures customers are supported across the world.
-Azure Event Grid event subscription configurations and events are automatically replicated across data centers in the availability zone, and replicated in the three availability zones (when available) in the region specified to provide automatic in-region recovery of your data in case of a failure in the region. See [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones) to learn more about the supported regions with availability zones.
+## In-region recovery using availability zones
-Azure availability zones are connected by a high-performance network with a round-trip latency of less than 2ms. They help your data stay synchronized and accessible when things go wrong. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. Availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones.
+Azure availability zones are physically separate locations within each Azure region that are tolerant to local failures. They're connected by a high-performance network with a round-trip latency of less than 2 milliseconds. Each availability zone is composed of one or more data centers equipped with independent power, cooling, and networking infrastructure. If one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. For more information about availability zones, see [Regions and availability zones](../availability-zones/az-overview.md). In this article, you can also see the list of regions that have availability zones.
-With availability zones, you can design and operate applications and databases that automatically transition between zones without interruption. Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple datacenter infrastructures.
-
-If a region supports availability zones, the event data is replicated across availability zones though.
+Event Grid resource definitions for topics, system topics, domains, and event subscriptions, as well as event data, are automatically replicated across three availability zones ([when available](../availability-zones/az-overview.md#azure-regions-with-availability-zones)) in the region. When there's a failure in one of the availability zones, Event Grid resources **automatically fail over** to another availability zone without any human intervention. Currently, it isn't possible for you to control (enable or disable) this feature. When an existing region starts supporting availability zones, existing Event Grid resources are automatically failed over to take advantage of this feature. No customer action is required.
:::image type="content" source="../availability-zones/media/availability-zones-region-geography.png" alt-text="Diagram that shows availability zones that protect against localized disasters and regional or large geography disasters by using another region.":::
-## Disaster recovery
+## Geo-disaster recovery across regions
-Event Grid supports automatic geo-disaster recovery of event subscription configuration data (metadata) for topics, system topics and domains. Event Grid automatically syncs your event-related infrastructure to a paired region. If an entire Azure region goes down, the events will begin to flow to the geo-paired region with no intervention from you.
+When an Azure region experiences a prolonged outage, you might be interested in failover options to an alternate region for business continuity. Many Azure regions have geo-pairs, and some don't. For a list of regions that have paired regions, see [Azure cross-region replication pairings for all geographies](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies).
-> [!NOTE]
-> Event data is not replicated to the paired region, only the metadata is replicated.
+For regions with a geo-pair, Event Grid offers a capability to fail over the publishing traffic to the paired region for custom topics, system topics, and domains. Behind the scenes, Event Grid automatically synchronizes resource definitions of topics, system topics, domains, and event subscriptions to the paired region. However, event data isn't replicated to the paired region. In the normal state, events are stored in the region you selected for that resource. When there's a region outage and Microsoft initiates the failover, new events will begin to flow to the geo-paired region and are dispatched from there with no intervention from you. Events published and accepted in the original region are dispatched from there after the outage is mitigated.
-Microsoft offers options to recover from a failure, you can opt to enable recovery to a paired region where available or disable recovery to a paired region to manage your own recovery. See [Azure cross-region replication pairings for all geographies](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) to learn more about the supported paired regions. The failover is nearly instantaneous once initiated. To learn more about how to implement your own failover strategy, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md) .
+Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. Microsoft reserves the right to determine when this option will be exercised. This mechanism doesn't involve a user consent before the user's traffic is failed over.
-Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over all the Event Grid resources from an affected region to the corresponding geo-paired region. This process is a default option and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve a user consent before the user's traffic is failed over.
+You can enable or disable this functionality by updating the configuration for your topic or domain. Select the **Cross-Geo** option (default) to enable Microsoft-initiated failover and **Regional** to disable it. For detailed steps to configure this setting, see [Configure data residency](configure-custom-topic.md#configure-data-residency). If you opt for **Regional**, no data of any kind is replicated to another region by Microsoft, and you may define your own disaster recovery plan. For more information, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md).
-If you have decided not to replicate any data to a paired region, you'll need to invest in some practices to build your own disaster recovery scenario and recover from a severe loss of application functionality using more than 2 regions. See [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md) for more details. See [Build your own client-side disaster recovery for Azure Event Grid topics](custom-disaster-recovery-client-side.md) in case you want to implement client-side disaster recovery for Azure Event Grid topics.
-## RTO and RPO
+Here are a few reasons why you may want to disable the Microsoft-initiated failover feature:
-Disaster recovery is measured with two metrics:
+- Microsoft-initiated failover is done on a best-effort basis.
+- Some geo pairs may not meet your organization's data residency requirements.
-- Recovery Point Objective (RPO): the minutes or hours of data that may be lost.-- Recovery Time Objective (RTO): the minutes or hours the service may be down.
+In such cases, the recommended option is to [build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md). While this option requires a bit more effort, it enables faster failover, and you are in control of choosing secondary regions. If you want to implement client-side disaster recovery for Azure Event Grid topics, see [Build your own client-side disaster recovery for Azure Event Grid topics](custom-disaster-recovery-client-side.md).
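If you choose the **Regional** option from a script rather than the portal, the following is a hedged sketch only: the `--data-residency-boundary` parameter is an assumption based on the data residency setting described above, so confirm it exists in your Azure CLI version (the portal steps in [Configure data residency](configure-custom-topic.md#configure-data-residency) remain the documented path). Names and location are illustrative.

```azurecli
# A hedged sketch: create a custom topic that keeps all data within the region, which also
# opts the topic out of Microsoft-initiated failover.
az eventgrid topic create \
  --name myRegionalTopic \
  --resource-group myResourceGroup \
  --location westus2 \
  --data-residency-boundary WithinRegion
```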
-Event Grid's automatic failover has different RPOs and RTOs for your metadata (topics, domains, event subscriptions.) and data (events). If you need different specification from the following ones, you can still implement your own [client-side fail over using the topic health apis](custom-disaster-recovery.md).
-### Recovery point objective (RPO)
-- **Metadata RPO**: zero minutes. Anytime a resource is created in Event Grid, it's instantly replicated across regions. When a failover occurs, no metadata is lost.-- **Data RPO**: If your system is healthy and caught up on existing traffic at the time of regional failover, the RPO for events is about 5 minutes.
+## RTO and RPO
-### Recovery time objective (RTO)
-- **Metadata RTO**: Though generally it happens much more quickly, within 60 minutes, Event Grid will begin to accept create/update/delete calls for topics and subscriptions.-- **Data RTO**: Like metadata, it generally happens much more quickly, however within 60 minutes, Event Grid will begin accepting new traffic after a regional failover.
+Disaster recovery is measured with two metrics:
-> [!IMPORTANT]
-> - There is no service level agreement (SLA) for server-side disaster recovery. If the paired region has no extra capacity to take on the additional traffic, Event Grid cannot initiate failover. Service level objectives are best-effort only.
-> - The cost for metadata GeoDR on Event Grid is: $0.
-> - Geo-disaster recovery isn't supported for partner topics.
+- Recovery Point Objective (RPO): the minutes or hours of data that may be lost.
+- Recovery Time Objective (RTO): the minutes or hours the service may be down.
-## Metrics
+Event Grid's automatic failover has different RPOs and RTOs for your metadata (topics, domains, and event subscriptions) and data (events). If you need a different specification from the following ones, you can still implement your own [client-side failover using the topic health APIs](custom-disaster-recovery.md).
-Event Grid also provides [diagnostic logs schemas](diagnostic-logs.md) and [metrics](metrics.md) that helps you to identify a problem when there is a failure when publishing or delivering events. See the [troubleshoot](troubleshoot-issues.md) article in case you need with solving an issue in Azure Event Grid.
+### Recovery point objective (RPO)
+- **Metadata RPO**: zero minutes. For applicable resources, when a resource is created/updated/deleted, the resource definition is synchronously replicated to the geo-pair. When a failover occurs, no metadata is lost.
+- **Data RPO**: When a failover occurs, new data is processed from the paired region. As soon as the outage is mitigated for the affected region, the unprocessed events are dispatched from there. If region recovery takes longer than the [time-to-live](delivery-and-retry.md#dead-letter-events) value set on events, the data could be dropped. To mitigate this data loss, we recommend that you [set up a dead-letter destination](manage-event-delivery.md) for an event subscription, as shown in the sketch after this list. If the affected region is completely lost and non-recoverable, there will be some data loss. In the best-case scenario, the subscriber is keeping up with the publish rate and only a few seconds of data is lost. The worst-case scenario would be when the subscriber isn't actively processing events; with a maximum time to live of 24 hours, the data loss can be up to 24 hours.
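Here's a hedged sketch of the dead-letter recommendation in the **Data RPO** bullet above; the resource names, webhook endpoint, and container are illustrative only.

```azurecli
# Point an event subscription's dead-letter destination at a storage blob container so
# undeliverable events are preserved instead of being dropped when their time-to-live expires.
storageId=$(az storage account show --name mystorageaccount --resource-group myResourceGroup --query id -o tsv)

az eventgrid event-subscription create \
  --name mySubscriptionWithDeadLetter \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.EventGrid/topics/myTopic" \
  --endpoint "https://contoso-function.azurewebsites.net/api/handler" \
  --deadletter-endpoint "$storageId/blobServices/default/containers/deadletterevents"
```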
-## More information
+### Recovery time objective (RTO)
+- **Metadata RTO**: The failover decision is based on factors like available capacity in the paired region, and it can take 60 minutes or more. Once failover is initiated, Event Grid begins to accept create/update/delete calls for topics and subscriptions within 5 minutes.
+- **Data RTO**: Same as above.
-You may find more information availability zone resiliency and disaster recovery in Azure Event Grid in our [FAQ](./event-grid-faq.yml).
+> [!IMPORTANT]
+> - In case of server-side disaster recovery, if the paired region has no extra capacity to take on the additional traffic, Event Grid cannot initiate failover. The recovery is done on a best-effort basis.
> - There is no additional cost for using this feature.
+> - Geo-disaster recovery is not supported for partner namespaces and partner topics.
## Next steps--- If you want to implement your own disaster recovery plan for Azure Event Grid topics and domains, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md).
+If you want to implement your own disaster recovery plan for Azure Event Grid topics and domains, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md).
expressroute Expressroute Erdirect About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-erdirect-about.md
ExpressRoute Direct supports massive data ingestion scenarios into Azure storage
| **Subscribed Bandwidth**: 200 Gbps | **Subscribed Bandwidth**: 20 Gbps | | <ul><li>5 Gbps</li><li>10 Gbps</li><li>40 Gbps</li><li>100 Gbps</li></ul> | <ul><li>1 Gbps</li><li>2 Gbps</li><li>5 Gbps</li><li>10 Gbps</li></ul>
+> You are able to provision logical ExpressRoute circuits on top of your chosen ExpressRoute Direct resource (10G/100G) up to the Subscribed Bandwidth (20G/200G). For example, you can provision two 10G ExpressRoute circuits within a single 10G ExpressRoute Direct resource (port pair). Today you must use Azure CLI or PowerShell when configuring circuits that over-subscribe the ExpressRoute Direct resource, as in the sketch that follows.
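The following is a hedged Azure CLI sketch of that over-subscription workflow; the resource names are illustrative, and the `--express-route-port` and `--bandwidth` syntax should be verified against your CLI version.

```azurecli
# Provision one of several circuits on an existing ExpressRoute Direct port pair. Repeat with
# different circuit names (and bandwidths) to over-subscribe up to the subscribed bandwidth.
portId=$(az network express-route port show --name myDirectPort --resource-group myResourceGroup --query id -o tsv)

az network express-route create \
  --name myDirectCircuit1 \
  --resource-group myResourceGroup \
  --location westus2 \
  --express-route-port "$portId" \
  --bandwidth 10 Gbps \
  --sku-tier Premium \
  --sku-family MeteredData
```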
+ ## Technical Requirements * Microsoft Enterprise Edge Router (MSEE) Interfaces:
governance Machine Configuration Policy Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-policy-effects.md
Before you begin, it's a good idea to read the overview page for
> [!IMPORTANT] > The machine configuration extension is required for Azure virtual machines. To > deploy the extension at scale across all machines, assign the following policy
-> initiative: `Deploy prerequisites to enable machine configuration policies on
+> initiative: `Deploy prerequisites to enable guest configuration policies on
> virtual machines` > > To use machine configuration packages that apply configurations, Azure VM guest > configuration extension version **1.29.24** or later, > or Arc agent **1.10.0** or later, is required. >
-> Custom machine configuration policy definitions using **AuditIfNotExists** are
-> Generally Available, but definitions using **DeployIfNotExists** with guest
-> configuration are **in preview**.
+> Custom machine configuration policy definitions using **AuditIfNotExists**
+> as well as **DeployIfNotExists** are now Generally Available.
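As a hedged sketch of assigning that initiative from the CLI: the display name is taken from the note above (adjust it if it differs in your environment), `--mi-system-assigned` requires a recent Azure CLI, and the subscription scope is illustrative.

```azurecli
# Look up the built-in initiative by display name, then assign it at subscription scope with a
# system-assigned managed identity, which remediation (DeployIfNotExists) requires.
initiativeId=$(az policy set-definition list \
  --query "[?displayName=='Deploy prerequisites to enable guest configuration policies on virtual machines'].id | [0]" -o tsv)

az policy assignment create \
  --name machine-config-prereqs \
  --policy-set-definition "$initiativeId" \
  --scope "/subscriptions/<subscription-id>" \
  --mi-system-assigned \
  --location eastus
```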
## How remediation (Set) is managed by machine configuration
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md
For more information, see [Azure Policy guest configuration](../concepts/guest-c
||||| |Devices: Allowed to format and eject removable media<br /><sub>(CCE-37701-0)</sub> |**Description**: This policy setting determines who is allowed to format and eject removable media. You can use this policy setting to prevent unauthorized users from removing data on one computer to access it on another computer on which they have local administrator privileges.<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AllocateDASD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning | |Devices: Prevent users from installing printer drivers<br /><sub>(CCE-37942-0)</sub> |**Description**: For a computer to print to a shared printer, the driver for that shared printer must be installed on the local computer. This security setting determines who is allowed to install a printer driver as part of connecting to a shared printer. The recommended state for this setting is: `Enabled`. **Note:** This setting does not affect the ability to add a local printer. This setting does not affect Administrators.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Print\Providers\LanMan Print Services\Servers\AddPrinterDrivers<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|Limits print driver installation to Administrators<br /><sub>(AZ_WIN_202202)</sub> |**Description**: This policy setting controls whether users that aren't Administrators can install print drivers on the system. The recommended state for this setting is: `Enabled`. **Note:** On August 10, 2021, Microsoft announced a [Point and Print Default Behavior Change](https://msrc-blog.microsoft.com/2021/08/10/point-and-print-default-behavior-change/) which modifies the default Point and Print driver installation and update behavior to require Administrator privileges. This is documented in [KB5005652�Manage new Point and Print default driver installation behavior (CVE-2021-34481)](https://support.microsoft.com/en-gb/topic/kb5005652-manage-new-point-and-print-default-driver-installation-behavior-cve-2021-34481-873642bf-2634-49c5-a23b-6d8e9a302872).<br />**Key Path**: Software\Policies\Microsoft\Windows NT\Printers\PointAndPrint\RestrictDriverInstallationToAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Limits print driver installation to Administrators<br /><sub>(AZ_WIN_202202)</sub> |**Description**: This policy setting controls whether users that aren't Administrators can install print drivers on the system. The recommended state for this setting is: `Enabled`. **Note:** On August 10, 2021, Microsoft announced a [Point and Print Default Behavior Change](https://msrc-blog.microsoft.com/2021/08/10/point-and-print-default-behavior-change/) which modifies the default Point and Print driver installation and update behavior to require Administrator privileges. This is documented in [KB5005652-Manage new Point and Print default driver installation behavior (CVE-2021-34481)](https://support.microsoft.com/en-gb/topic/kb5005652-manage-new-point-and-print-default-driver-installation-behavior-cve-2021-34481-873642bf-2634-49c5-a23b-6d8e9a302872).<br />**Key Path**: Software\Policies\Microsoft\Windows NT\Printers\PointAndPrint\RestrictDriverInstallationToAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
## Security Options - Domain member
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Restore files and directories<br /><sub>(CCE-37613-7)</sub> |**Description**: This policy setting determines which users can bypass file, directory, registry, and other persistent object permissions when restoring backed up files and directories on computers that run Windows Vista in your environment. This user right also determines which users can set valid security principals as object owners; it is similar to the Backup files and directories user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRestorePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators<br /><sub>(Policy)</sub> |Warning | |Shut down the system<br /><sub>(CCE-38328-1)</sub> |**Description**: This policy setting determines which users who are logged on locally to the computers in your environment can shut down the operating system with the Shut Down command. Misuse of this user right can result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Backup Operators<br /><sub>(Policy)</sub> |Warning | |Take ownership of files or other objects<br /><sub>(CCE-38325-7)</sub> |**Description**: This policy setting allows users to take ownership of files, folders, registry keys, processes, or threads. This user right bypasses any permissions that are in place to protect objects to give ownership to the specified user. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeTakeOwnershipPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
-|The Impersonate a client after authentication user right must only be assigned to Administrators, Service, Local Service, and Network Service.<br /><sub>(AZ-WIN-73785)</sub> |**Description**: The policy setting allows programs that run on behalf of a user to impersonate that user (or another specified account) so that they can act on behalf of the user. If this user right is required for this kind of impersonation, an unauthorized user will not be able to convince a client to connect�for example, by remote procedure call (RPC) or named pipes�to a service that they have created to impersonate that client, which could elevate the unauthorized user's permissions to administrative or system levels. Services that are started by the Service Control Manager have the built-in Service group added by default to their access tokens. COM servers that are started by the COM infrastructure and configured to run under a specific account also have the Service group added to their access tokens. As a result, these processes are assigned this user right when they are started. Also, a user can impersonate an access token if any of the following conditions exist: - The access token that is being impersonated is for this user. - The user, in this logon session, logged on to the network with explicit credentials to create the access token. - The requested level is less than Impersonate, such as Anonymous or Identify. An attacker with the **Impersonate a client after authentication** user right could create a service, trick a client to make them connect to the service, and then impersonate that client to elevate the attacker's level of access to that of the client. The recommended state for this setting is: `Administrators, LOCAL SERVICE, NETWORK SERVICE, SERVICE`. **Note:** This user right is considered a "sensitive privilege" for the purposes of auditing. **Note #2:** A Member Server with Microsoft SQL Server _and_ its optional "Integration Services" component installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeImpersonatePrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Service, Local Service, Network Service<br /><sub>(Policy)</sub> |Important |
+|The Impersonate a client after authentication user right must only be assigned to Administrators, Service, Local Service, and Network Service.<br /><sub>(AZ-WIN-73785)</sub> |**Description**: The policy setting allows programs that run on behalf of a user to impersonate that user (or another specified account) so that they can act on behalf of the user. If this user right is required for this kind of impersonation, an unauthorized user will not be able to convince a client to connect-for example, by remote procedure call (RPC) or named pipes-to a service that they have created to impersonate that client, which could elevate the unauthorized user's permissions to administrative or system levels. Services that are started by the Service Control Manager have the built-in Service group added by default to their access tokens. COM servers that are started by the COM infrastructure and configured to run under a specific account also have the Service group added to their access tokens. As a result, these processes are assigned this user right when they are started. Also, a user can impersonate an access token if any of the following conditions exist: - The access token that is being impersonated is for this user. - The user, in this logon session, logged on to the network with explicit credentials to create the access token. - The requested level is less than Impersonate, such as Anonymous or Identify. An attacker with the **Impersonate a client after authentication** user right could create a service, trick a client to make them connect to the service, and then impersonate that client to elevate the attacker's level of access to that of the client. The recommended state for this setting is: `Administrators, LOCAL SERVICE, NETWORK SERVICE, SERVICE`. **Note:** This user right is considered a "sensitive privilege" for the purposes of auditing. **Note #2:** A Member Server with Microsoft SQL Server _and_ its optional "Integration Services" component installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeImpersonatePrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Service, Local Service, Network Service<br /><sub>(Policy)</sub> |Important |
## Windows Components
hdinsight Hdinsight Apps Install Hiveserver2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-hiveserver2.md
Previously updated : 08/12/2020 Last updated : 09/28/2022 # Scale HiveServer2 on Azure HDInsight Clusters for High Availability
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
Throughput must be provisioned to ensure that sufficient system resources are av
## Update throughput
-To change this setting in the Azure portal, navigate to your Azure API for FHIR and open the Database blade. Next, change the Provisioned throughput to the desired value depending on your performance needs. You can change the value up to a maximum of 10,000 RU/s. If you need a higher value, contact Azure support.
+To change this setting in the Azure portal, navigate to your Azure API for FHIR and open the Database blade. Next, change the Provisioned throughput to the desired value depending on your performance needs. You can change the value up to a maximum of 100,000 RU/s. If you need a higher value, contact Azure support.
If the database throughput is greater than 10,000 RU/s or if the data stored in the database is more than 50 GB, your client application must be capable of handling continuation tokens. A new partition is created in the database for every throughput increase of 10,000 RU/s or if the amount of data stored is more than 50 GB. Multiple partitions create a multi-page response in which pagination is implemented by using continuation tokens.
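As a hedged illustration of handling continuation tokens (not part of the original article), a client can simply follow the search Bundle's `next` link, which carries the continuation token, until no more pages remain. The account name is a placeholder, and the sketch assumes a bearer token in `$TOKEN` and `jq` installed.

```azurecli
url="https://<your-fhir-account>.azurehealthcareapis.com/Patient?_count=100"
while [ -n "$url" ] && [ "$url" != "null" ]; do
  page=$(curl -s -H "Authorization: Bearer $TOKEN" "$url")
  echo "$page" | jq -r '.entry[]?.resource.id'                      # process this page of results
  url=$(echo "$page" | jq -r '.link[]? | select(.relation=="next") | .url')
done
```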
Or you can deploy a fully managed Azure API for FHIR:
>[!div class="nextstepaction"] >[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/validation-against-profiles.md
For example:
`POST https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/$validate`
-This request will create the new resource you're specifying in the request payload and validate the uploaded resource. Then, it will return an `OperationOutcome` as a result of the validation on the new resource.
+This request will first validate the resource. The new resource you're specifying in the request will be created after validation.
+The server will always return an `OperationOutcome` as the result.
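For example, here's a hedged sketch of calling `$validate` with a minimal Patient resource; it assumes a bearer token in `$TOKEN`, and the payload is illustrative only.

```azurecli
# The server responds with an OperationOutcome describing any validation issues.
curl -s -X POST "https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/\$validate" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"resourceType": "Patient", "name": [{"family": "Chalmers", "given": ["Peter"]}]}'
```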
## Validate on resource CREATE or resource UPDATE
healthcare-apis How To Use Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-device-mappings.md
Previously updated : 09/12/2022 Last updated : 09/27/2022
The two types of mappings are composed into a JSON document based on their type.
> [!TIP] > Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service device and FHIR destination mappings; and export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-> [!NOTE]
+> [!IMPORTANT]
> Links to OSS projects on the GitHub website are for informational purposes only and do not constitute an endorsement or guarantee of any kind. You should review the information and licensing terms on the OSS projects on GitHub before using it. ## Device mappings overview
healthcare-apis Iot Git Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-git-projects.md
# Open-source projects
-Check out our open-source projects on GitHub that provide source code and instructions to deploy services for various uses with the MedTech service.
+Check out our open-source projects on GitHub that provide source code and instructions to deploy services for various uses with the MedTech service.
+
+> [!IMPORTANT]
+> Links to OSS projects on the GitHub website are for informational purposes only and do not constitute an endorsement or guarantee of any kind. You should review the information and licensing terms on the OSS projects on GitHub before using it.
## MedTech service GitHub projects
Learn how to deploy the MedTech service in the Azure portal
>[!div class="nextstepaction"] >[Deploy the MedTech service managed service](deploy-iot-connector-in-azure.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-central Concepts Device Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-authentication.md
To connect a device with an X.509 certificate to your application:
1. Add and verify an intermediate or root X.509 certificate in the enrollment group. 1. Generate a leaf certificate from the root or intermediate certificate in the enrollment group. Install the leaf certificate on the device for it to use when it connects to your application.
+Each enrollment group should use a unique X.509 certificate. IoT Central does not support using the same X.509 certificate across multiple enrollment groups.
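As a hedged OpenSSL sketch of the certificate steps above (for test certificates only; names and validity periods are illustrative): create a root certificate, upload and verify it in the enrollment group, then issue a leaf certificate whose common name matches the device ID and install it on the device.

```azurecli
# Create a self-signed root CA certificate and key.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout root.key -out root.pem -subj "/CN=my-iot-central-test-root"

# Create a key and certificate signing request for the device; the CN must match the device ID.
openssl req -newkey rsa:4096 -nodes \
  -keyout device01.key -out device01.csr -subj "/CN=device01"

# Issue the leaf certificate signed by the root; install device01.pem and device01.key on the device.
openssl x509 -req -in device01.csr -CA root.pem -CAkey root.key \
  -CAcreateserial -sha256 -days 365 -out device01.pem
```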
+ To learn more, see [How to connect devices with X.509 certificates](how-to-connect-devices-x509.md) ### For testing purposes only
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-configure-rules.md
Title: Quickstart - Configure rules and actions in Azure IoT Central
-description: This quickstart shows you how to configure telemetry-based rules and actions in your IoT Central application.
+description: In this quickstart, you learn how to configure telemetry-based rules and actions in your IoT Central application.
Previously updated : 06/10/2022 Last updated : 09/26/2022 +
+# Customer intent: As a new user of IoT Central, I want to learn how to use rules to notify me when a specific condition is detected on one of my devices.
# Quickstart: Configure rules and actions for your device in Azure IoT Central
-In this quickstart, you create an IoT Central rule that sends an email when someone turns your phone over.
+Get started with IoT Central rules. IoT Central rules let you automate actions that occur in response to specific conditions. The example in this quickstart uses accelerometer telemetry from the phone to trigger a rule when the phone is turned over.
+
+In this quickstart, you:
+
+- Create a rule that detects when a telemetry value passes a threshold.
+- Configure the rule to notify you by email.
+- Use the smartphone app to test the rule.
## Prerequisites
-Before you begin, you should complete the previous quickstart [Connect your first device](./quick-deploy-iot-central.md). It shows you how to create an Azure IoT Central application and connect the **IoT Plug and Play** smartphone app to it.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- Complete the first quickstart [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
## Create a telemetry-based rule
After your testing is complete, disable the rule to stop receiving the notificat
## Next steps
-In this quickstart, you learned how to:
-
-* Create a telemetry-based rule
-* Add an action
+In this quickstart, you learned how to create a telemetry-based rule and add an action to it.
To learn more about integrating your IoT Central application with other services, see:
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
Title: Quickstart - Connect a device to an Azure IoT Central application | Microsoft Docs
-description: Quickstart - Connect your first device to a new IoT Central application. This quickstart uses a smartphone app from either the Google Play or Apple app store as an IoT device.
+description: In this quickstart, you learn how to connect your first device to a new IoT Central application. This quickstart uses a smartphone app from either the Google Play or Apple app store as an IoT device.
Previously updated : 06/22/2022 Last updated : 09/26/2022 +
+# Customer intent: As a new user of IoT Central, I want to learn how to get started with an IoT Central application and an IoT device.
# Quickstart - Use your smartphone as a device to send telemetry to an IoT Central application
-This quickstart shows you how to create an Azure IoT Central application and connect your first device. To get you started quickly, you install an app on your smartphone to act as the device. The app sends telemetry, reports properties, and responds to commands:
+Get started with an Azure IoT Central application and connect your first device. To get you started quickly, you install an app on your smartphone to act as the device. The app sends telemetry, reports properties, and responds to commands:
:::image type="content" source="media/quick-deploy-iot-central/overview.png" alt-text="Overview of quickstart scenario connecting a smartphone app to IoT Central." border="false":::
+In this quickstart, you:
+
+- Create an IoT Central application.
+- Register a new device in the application.
+- Connect a device to the application and view the telemetry it sends.
+- Control the device from your application.
+ ## Prerequisites
-An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-> [!TIP]
-> You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md)
+ You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md)
-An Android or iOS smartphone on which you're able to install a free app from one of the official app stores.
+- An Android or iOS smartphone on which you're able to install a free app from one of the official app stores.
## Create an application
IoT Central provides various industry-focused application templates to help you
:::image type="content" source="media/quick-deploy-iot-central/iot-central-create-new-application.png" alt-text="Build your IoT application page":::
+ If you're prompted to sign in, use the Microsoft account associated with your Azure subscription.
+ 1. On the **New application** page, make sure that **Custom application** is selected under the **Application template**. 1. Azure IoT Central automatically suggests an **Application name** based on the application template you've selected. Enter your own application name such as *Contoso quickstart app*.
To register your device:
1. On the **Create a new device** page, accept the defaults, and then select **Create**.
-1. In the list of devices, click the device name:
+1. In the list of devices, click on the device name:
:::image type="content" source="media/quick-deploy-iot-central/device-name.png" alt-text="A screenshot that shows the highlighted device name that you can select.":::
To view the telemetry from the smartphone app in IoT Central:
1. In IoT Central, navigate to the **Devices** page.
-1. In the list of devices, click on your device name, then select **Overview**:
+1. In the list of devices, click on the device name, then select **Overview**:
:::image type="content" source="media/quick-deploy-iot-central/iot-central-telemetry.png" alt-text="Screenshot of the overview page with telemetry plots.":::
To see the acknowledgment from the smartphone app, select **command history**.
## Next steps
-In this quickstart, you created an IoT Central application and connected device that sends telemetry. In this quickstart, you used a smartphone app as the IoT device that connects to IoT Central. Here's the suggested next step to continue learning about IoT Central:
+In this quickstart, you created an IoT Central application and connected a device that sends telemetry. Then you used a smartphone app as the IoT device that connects to IoT Central. Here's the suggested next step to continue learning about IoT Central:
> [!div class="nextstepaction"] > [Add a rule to your IoT Central application](./quick-configure-rules.md)
iot-central Quick Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-export-data.md
Title: Quickstart - Export data from Azure IoT Central
-description: Quickstart - Learn how to use the data export feature in IoT Central to integrate with other cloud services.
+description: In this quickstart, you learn how to use the data export feature in IoT Central to integrate with other cloud services.
Previously updated : 02/18/2022 Last updated : 09/26/2022 ms.devlang: azurecli+
+# Customer intent: As a new user of IoT Central, I want to learn how to use the data export feature so that I can integrate my IoT Central application with other backend services.
# Quickstart: Export data from an IoT Central application
-This quickstart shows you how to continuously export data from your Azure IoT Central application to another cloud service. To get you set up quickly, this quickstart uses [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), a fully managed data analytics service for real-time analysis. Azure Data Explorer lets you store, query, and process the telemetry from devices such as the **IoT Plug and Play** smartphone app.
+Get started with IoT Central data export to integrate your IoT Central application with another cloud service such as Azure Data Explorer. Azure Data Explorer lets you store, query, and process the telemetry from devices such as the **IoT Plug and Play** smartphone app.
In this quickstart, you:

-- Use the data export feature in IoT Central to export the telemetry sent by the smartphone app to an Azure Data Explorer database.
+- Use the data export feature in IoT Central to export the telemetry from the smartphone app to an Azure Data Explorer database.
- Use Azure Data Explorer to run queries on the telemetry.
+Completing this quickstart incurs a small cost in your Azure account for the Azure Data Explorer instance. The first two devices in your IoT Central application are free.
+
## Prerequisites

-- Before you begin, you should complete the first quickstart [Create an Azure IoT Central application](./quick-deploy-iot-central.md). The second quickstart, [Configure rules and actions for your device](quick-configure-rules.md), is optional.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- Complete the first quickstart [Create an Azure IoT Central application](./quick-deploy-iot-central.md). The second quickstart, [Configure rules and actions for your device](quick-configure-rules.md), is optional.
- You need the IoT Central application *URL prefix* that you chose in the first quickstart [Create an Azure IoT Central application](./quick-deploy-iot-central.md).

[!INCLUDE [azure-cli-prepare-your-environment-no-header](../../../includes/azure-cli-prepare-your-environment-no-header.md)]

## Install Azure services
-Before you can export data from your IoT Central application, you need an Azure Data Explorer cluster and database. In this quickstart, you use the bash environment in the [Azure Cloud Shell](https://shell.azure.com) to create and configure them.
+Before you can export data from your IoT Central application, you need an Azure Data Explorer cluster and database. In this quickstart, you run a bash script in the [Azure Cloud Shell](https://shell.azure.com) to create and configure them.
-Run the following script in the Azure Cloud Shell. Replace the `clustername` value with a unique name for your cluster before you run the script. The cluster name can contain only lowercase letters and numbers. Replace the `centralurlprefix` value with the URL prefix you chose in the first quickstart:
+The script completes the following steps:
-> [!IMPORTANT]
-> The script can take 20 to 30 minutes to run.
+- Prompts you to sign in to your Azure subscription so that it can generate a bearer token to authenticate the REST API calls.
+- Creates an Azure Data Explorer cluster and database.
+- Creates a managed identity for your IoT Central application.
+- Configures the managed identity with permission to access the Azure Data Explorer database.
+- Adds a table to the database to store the incoming telemetry from IoT Central.
+
+Run the following commands to download the script to your Azure Cloud Shell environment:
```azurecli
-# The cluster name can contain only lowercase letters and numbers.
-# It must contain from 4 to 22 characters.
-clustername="<A unique name for your cluster>"
-
-centralurlprefix="<The URL prefix of your IoT Central application>"
-
-databasename="phonedata"
-location="eastus"
-resourcegroup="IoTCentralExportData"
-
-az extension add -n kusto
-
-# Create a resource group for the Azure Data Explorer cluster
-az group create --location $location \
- --name $resourcegroup
-
-# Create the Azure Data Explorer cluster
-# This command takes at least 10 minutes to run
-az kusto cluster create --cluster-name $clustername \
- --sku name="Standard_D11_v2" tier="Standard" \
- --enable-streaming-ingest=true \
- --enable-auto-stop=true \
- --resource-group $resourcegroup --location $location
-
-# Create a database in the cluster
-az kusto database create --cluster-name $clustername \
- --database-name $databasename \
- --read-write-database location=$location soft-delete-period=P365D hot-cache-period=P31D \
- --resource-group $resourcegroup
-
-# Create and assign a managed identity to use
-# when authenticating from IoT Central.
-# This assumes your IoT Central was created in the default
-# IOTC resource group.
-MI_JSON=$(az iot central app identity assign --name $centralurlprefix \
- --resource-group IOTC --system-assigned)
-
-## Assign the managed identity permissions to use the database.
-az kusto database-principal-assignment create --cluster-name $clustername \
- --database-name $databasename \
- --principal-id $(jq -r .principalId <<< $MI_JSON) \
- --principal-assignment-name $centralurlprefix \
- --resource-group $resourcegroup \
- --principal-type App \
- --tenant-id $(jq -r .tenantId <<< $MI_JSON) \
- --role Admin
-
-echo "Azure Data Explorer URL: $(az kusto cluster show --name $clustername --resource-group $resourcegroup --query uri -o tsv)"
+wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/quickstart-cde/createADX.sh
+chmod u+x createADX.sh
```
-Make a note of the **Azure Data Explorer URL**. You use this value later in the quickstart.
-
-## Configure the database
-
-To add a table in the database to store the accelerometer data from the **IoT Plug and Play** smartphone app:
-
-1. Use the **Azure Data Explorer URL** from the previous section to navigate to your Azure Data Explorer environment.
-
-1. Expand the cluster node and select the **phonedata** database. The cope of the query window changes to `Scope:yourclustername.eastus/phonedata`.
-
-1. Paste the following Kusto script into the query editor and select **Run**:
-
- ```kusto
- .create table acceleration (
- EnqueuedTime:datetime,
- Device:string,
- X:real,
- Y:real,
- Z:real
- );
- ```
-
- The result looks like the following screenshot:
+Use the following command to run the script:
- :::image type="content" source="media/quick-export-data/azure-data-explorer-create-table.png" alt-text="Screenshot that shows the results of creating the table in Azure Data Explorer.":::
+- Replace `CLUSTER_NAME` with a unique name for your Azure Data Explorer cluster. The cluster name can contain only lowercase letters and numbers. The length of the cluster name must be between 4 and 22 characters.
+- Replace `CENTRAL_URL_PREFIX` with the URL prefix you chose in the first quickstart for your IoT Central application.
+- When prompted, follow the instructions to sign in to your account. The sign-in is required because the script generates a bearer token to authenticate a REST API call.
-1. In the Azure Data Explorer, open a new tab and paste in the following Kusto script. The script enables streaming ingress for the **acceleration** table:
+```azurecli
+./createADX.sh CLUSTER_NAME CENTRAL_URL_PREFIX
+```
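For example, with placeholder values (a made-up cluster name that satisfies the lowercase, 4 to 22 character rule, and a sample URL prefix), the call might look like the following sketch:

```azurecli
./createADX.sh contosoadx1234 contoso-quickstart-app
```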
- ```kusto
- .alter table acceleration policy streamingingestion enable;
- ```
+> [!IMPORTANT]
+> This script can take 20 to 30 minutes to run.
-Keep the Azure Data Explorer page open in your browser. After you configure the data export in your IoT Central application, you'll run a query to view the accelerometer telemetry stored in the **acceleration** table.
+Make a note of the **Azure Data Explorer URL** output by the script. You use this value later in the quickstart.
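If you lose the URL, you can look it up again from the cluster itself. This is a hedged sketch that assumes the script created the cluster in the **IoTCentralExportData-rg** resource group used in the cleanup section; substitute your own cluster name:

```azurecli
# Retrieve the Azure Data Explorer cluster URL again if needed.
az kusto cluster show --name CLUSTER_NAME \
    --resource-group IoTCentralExportData-rg \
    --query uri --output tsv
```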
## Configure data export
Wait until the export status shows **Healthy**:
## Query exported data
-In Azure Data Explorer, open a new tab and paste in the following Kusto query and then select **Run** to plot the accelerometer telemetry:
+To query the exported telemetry:
-```kusto
-['acceleration']
- | project EnqueuedTime, Device, X, Y, Z
- | render timechart
-```
+1. Use the **Azure Data Explorer URL** output by the script you ran previously to navigate to your Azure Data Explorer environment.
+
+1. Expand the cluster node and select the **phonedata** database. The scope of the query window changes to `Scope:yourclustername.eastus/phonedata`.
+
+1. In Azure Data Explorer, open a new tab and paste in the following Kusto query and then select **Run** to plot the accelerometer telemetry:
+
+ ```kusto
+ ['acceleration']
+ | project EnqueuedTime, Device, X, Y, Z
+ | render timechart
+ ```
You may need to wait for several minutes to collect enough data. Try holding your phone in different orientations to see the telemetry values change:
You may need to wait for several minutes to collect enough data. Try holding you
[!INCLUDE [iot-central-clean-up-resources](../../../includes/iot-central-clean-up-resources.md)]
-To remove the Azure Data Explorer instance from your subscription and avoid being billed unnecessarily, delete the **IoTCentralExportData** resource group from the [Azure portal](https://portal.azure.com/#blade/HubsExtension/BrowseResourceGroups) or run the following command in the Azure Cloud Shell:
+To remove the Azure Data Explorer instance from your subscription and avoid being billed unnecessarily, delete the **IoTCentralExportData-rg** resource group from the [Azure portal](https://portal.azure.com/#blade/HubsExtension/BrowseResourceGroups) or run the following command in the Azure Cloud Shell:
```azurecli
-az group delete --name IoTCentralExportData
+az group delete --name IoTCentralExportData-rg
```

## Next steps
In this quickstart, you learned how to continuously export data from IoT Central
Now that you know how to export your data, the suggested next step is to:

> [!div class="nextstepaction"]
-> [Build and manage a device template](howto-set-up-template.md).
+> [Create and connect a device](tutorial-connect-device.md).
iot-develop Concepts Azure Rtos Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-azure-rtos-security-practices.md
Support current TLS versions:
**Azure RTOS**: TLS 1.2 is enabled by default. TLS 1.3 support must be explicitly enabled in Azure RTOS because TLS 1.2 is still the de-facto standard.
+Also ensure that the following corresponding NetX Secure configuration options are set. For details, see the [list of configurations](https://learn.microsoft.com/azure/rtos/netx-duo/netx-secure-tls/chapter2#configuration-options).
+
+```c
+/* Enables secure session renegotiation extension */
+#define NX_SECURE_TLS_DISABLE_SECURE_RENEGOTIATION 0
+
+/* Disables protocol version downgrade for TLS client. */
+#define NX_SECURE_TLS_DISABLE_PROTOCOL_VERSION_DOWNGRADE
+```
+
+When you set up NetX Secure TLS, use [`nx_secure_tls_session_time_function_set()`](https://learn.microsoft.com/azure/rtos/netx-duo/netx-secure-tls/chapter4#nx_secure_tls_session_time_function_set) to set a timing function that returns the current GMT in UNIX 32-bit format, so that certificate expiration dates can be checked.
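For illustration, a minimal sketch of registering such a time source follows. The helper name `board_get_unix_time_gmt()` is an assumption, not part of NetX; supply a function backed by your own RTC or SNTP client.

```c
/* Sketch: register a GMT time source with a NetX Secure TLS session so that
   certificate expiration dates can be validated during the handshake. */
#include "nx_secure_tls_api.h"

/* Hypothetical hook into your board's clock (for example, an RTC or SNTP client). */
extern ULONG board_get_unix_time_gmt(VOID);

static ULONG tls_time_callback(VOID)
{
    /* Must return the current GMT as UNIX 32-bit epoch seconds. */
    return board_get_unix_time_gmt();
}

UINT enable_certificate_expiration_checks(NX_SECURE_TLS_SESSION *tls_session)
{
    /* Register the time callback with the TLS session. */
    return nx_secure_tls_session_time_function_set(tls_session, tls_time_callback);
}
```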
+
**Application**: To use TLS with cloud services, a certificate is required. The certificate must be managed by the application.

### Use X.509 certificates for TLS authentication
Use the strongest cryptography and cipher suites available for TLS. You need the
**Azure RTOS**: Azure RTOS TLS provides hardware drivers for select devices that support cryptography in hardware. For routines not supported in hardware, the [Azure RTOS cryptography library](/azure/rtos/netx/netx-crypto/chapter1) is designed specifically for embedded systems. A FIPS 140-2 certified library that uses the same code base is also available.
-**Application**: Applications that use TLS should choose cipher suites that use hardware-based cryptography when it's available. They should also use the strongest keys available.
+**Application**: Applications that use TLS should choose cipher suites that use hardware-based cryptography when it's available. They should also use the strongest keys available. Note that the following TLS cipher suites, supported in TLS 1.2, don't provide forward secrecy:
+
+- **TLS_RSA_WITH_AES_128_CBC_SHA256**
+- **TLS_RSA_WITH_AES_256_CBC_SHA256**
+
+Consider using **TLS_RSA_WITH_AES_128_GCM_SHA256** if available.
+
+SHA-1 is no longer considered cryptographically secure. Avoid cipher suites that use SHA-1 (such as **TLS_RSA_WITH_AES_128_CBC_SHA**) if possible.
+
+AES-CBC mode is susceptible to Lucky Thirteen attacks. Applications should use AES-GCM instead (for example, **TLS_RSA_WITH_AES_128_GCM_SHA256**).
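As a rough illustration, the suites a NetX Secure TLS session can offer come from the `NX_SECURE_TLS_CRYPTO` cipher table passed at session creation, so narrowing that table is the usual way to drop weak suites. The table name, metadata size, and session variable below are assumptions based on the generic cipher table shipped with the NetX Secure crypto library; your project may define its own table.

```c
/* Sketch: a TLS session can only negotiate the cipher suites present in the
   NX_SECURE_TLS_CRYPTO table supplied here, so supply a table that omits
   CBC and SHA-1 based suites if your project allows it. */
#include "nx_secure_tls_api.h"

/* Generic table from the NetX Secure crypto library; replace with a custom
   NX_SECURE_TLS_CRYPTO table to restrict the offered suites. */
extern const NX_SECURE_TLS_CRYPTO nx_crypto_tls_ciphers;

static NX_SECURE_TLS_SESSION tls_session;
static UCHAR tls_crypto_metadata[16 * 1024];   /* size depends on the cipher table */

UINT create_tls_session_with_cipher_table(VOID)
{
    /* The session offers only the suites defined in the supplied table. */
    return nx_secure_tls_session_create(&tls_session,
                                        &nx_crypto_tls_ciphers,
                                        tls_crypto_metadata,
                                        sizeof(tls_crypto_metadata));
}
```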
### TLS mutual certificate authentication
Whether you're using Azure RTOS in combination with Azure Sphere or not, the Mic
- [Common Criteria](https://www.commoncriteriaportal.org/) is an international agreement that provides standardized guidelines and an authorized laboratory program to evaluate products for IT security. Certification provides a level of confidence in the security posture of applications using devices that were evaluated by using the program guidelines.
- [Security Evaluation Standard for IoT Platforms (SESIP)](https://globalplatform.org/sesip/) is a standardized methodology for evaluating the security of connected IoT products and components.
- [ISO 27000 family](https://www.iso.org/isoiec-27001-information-security.html) is a collection of standards regarding the management and security of information assets. The standards provide baseline guarantees about the security of digital information in certified products.
-- [FIPS 140-2/3](https://csrc.nist.gov/publications/detail/fips/140/3/final) is a US government program that standardizes cryptographic algorithms and implementations used in US government and military applications. Along with documented standards, certified laboratories provide FIPS certification to guarantee specific cryptographic implementations adhere to regulations.
+- [FIPS 140-2/3](https://csrc.nist.gov/publications/detail/fips/140/3/final) is a US government program that standardizes cryptographic algorithms and implementations used in US government and military applications. Along with documented standards, certified laboratories provide FIPS certification to guarantee specific cryptographic implementations adhere to regulations.
load-balancer Load Balancer Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md
Use these PowerShell scripts to help with upgrading from Basic to Standard SKU:
- [Upgrading a basic to standard public load balancer](upgrade-basic-standard.md)
- [Upgrade from Basic Internal to Standard Internal](upgrade-basicInternal-standard.md)
- [Upgrade an internal basic load balancer - Outbound connections required](upgrade-internalbasic-to-publicstandard.md)
+- [Upgrade a basic load balancer used with Virtual Machine Scale Sets](./upgrade-basic-standard-virtual-machine-scale-sets.md)
## Next Steps

For guidance on upgrading basic Public IP addresses to Standard SKUs, see:

> [!div class="nextstepaction"]
-> [Upgrading a Basic Public IP to Standard Public IP - Guidance](../virtual-network/ip-services/public-ip-basic-upgrade-guidance.md)
+> [Upgrading a Basic Public IP to Standard Public IP - Guidance](../virtual-network/ip-services/public-ip-basic-upgrade-guidance.md)
load-balancer Load Balancer Migrate Nic To Ip Based Backend Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-migrate-nic-to-ip-based-backend-pools.md
- Title: Migrating from NIC to IP-based backend pools-
-description: This article covers migrating a load balancer from NIC-based backend pools to IP-based backend pools for virtual machines and virtual machine scale sets.
----- Previously updated : 09/22/2022---
-# Migrating from NIC to IP-based backend pools
-
-In this article, you'll learn how to migrate a load balancer with NIC-based backend pools to use IP-based backend pools with virtual machines and virtual machine scale sets
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An existing standard Load Balancer in the subscription, with NIC-based backend pools.-
-## What is IP-based Load Balancer
-
-IP-based load balancers reference the private IP address of the resource in the backend pool rather than the resource's NIC. IP-based load balancers enable the pre-allocation of private IP addresses in a backend pool, without having to create the backend resources themselves in advance.
-
-## Migrating NIC-based virtual machine backend pools too IP-based
-
-To migrate a load balancer with NIC-based backend pools to IP-based with VMs (not virtual machine scale sets instances) in the backend pool, you can utilize the following migration REST API.
-
-```http
-
-POST URL: https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.Network/loadBalancers/{lbName}/migrateToIpBased?api-version=2022-01-01
-
-```
-### URI Parameters
-
-| Name | In | Required | Type | Description |
-|- | - | - | - | - |
-|Sub | Path | True | String | The subscription credentials which uniquely identify the Microsoft Azure subscription. The subscription ID forms part of the URI for every service call. |
-| Rg | Path | True | String | The name of the resource group. |
-| LbName | Path | True | String | The name of the load balancer. |
-| api-version | Query | True | String | Client API Version |
-
-### Request Body
-
-| Name | Type | Description |
-| - | - | - |
-| Backend Pools | String | A list of backend pools to migrate. Note if request body is specified, all backend pools will be migrated. |
-
-A full example using the CLI to migrate all backend pools in a load balancer is shown here:
-
-```azurecli-interactive
-
-az rest -m post -u "https://management.azure.com/subscriptions/MySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLB/migrateToIpBased?api-version=2022-01-01"
-
-```
--
-A full example using the CLI to migrate a set of specific backend pool in a load balancer is shown below. To migrate a specific group of backend pools from NIC-based to IP-based, you can pass in a list of the backend pool names in the request body:
-
-```azurecli-interactive
-
-az rest -m post -u "https://management.azure.com/subscriptions/MySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLB/migrateToIpBased?api-version=2022-01-01"
---b {\"Pools\":[\"MyBackendPool\"]}
-```
-## Upgrading LB with virtual machine scale sets attached
-
-To upgrade a NIC-based load balancer to IP based load balancer with virtual machine scale sets in the backend pool, follow the following steps:
-1. Configure the upgrade policy of the virtual machine scale sets to be automatic. If the upgrade policy isn't set to automatic, all virtual machine scale sets instances must be upgraded after calling the migration API.
-1. Using Azure's migration REST API, upgrade the NIC based LB to an IP based LB. If a manual upgrade policy is in place, upgrade all VMs in the virtual machine scale sets before step 3.
-1. Remove the reference of the load balancer from the network profile of the virtual machine scale sets, and update the VM instances to reflect the changes.
-
-A full example using the CLI is shown here:
-
-```azurecli-interactive
-
-az rest -m post -u "https://management.azure.com/subscriptions/MySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLB/migrateToIpBased?api-version=2022-01-01"
-
-az virtual machine scale sets update --resource-group MyResourceGroup --name MyVMSS --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerBackendAddressPools
-
-```
-
-## Next Steps
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
An Azure PowerShell script is available that does the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool; one way to check is shown after this list.
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and follow [Update or delete a load balancer used by virtual machine scale sets](./update-load-balancer-with-vm-scale-set.md) to complete the migration.
+* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. For this type of upgrade, see [Upgrade a basic load balancer used with Virtual Machine Scale Sets](./upgrade-basic-standard-virtual-machine-scale-sets.md) for instructions and more information.
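As a quick, hedged check that the load balancer has both a frontend IP configuration and a backend pool before you run the script, you can list them with the Azure CLI. The load balancer and resource group names below are placeholders:

```azurecli
# Placeholders: substitute your load balancer and resource group names.
az network lb frontend-ip list --lb-name MyLoadBalancer --resource-group MyResourceGroup --output table
az network lb address-pool list --lb-name MyLoadBalancer --resource-group MyResourceGroup --output table
```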
### Change allocation method of the public IP address to static
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
This article introduces a PowerShell script that creates a Standard Load Balance
* The Basic Load Balancer needs to be in the same resource group as the backend VMs and NICs.
* If the Standard load balancer is created in a different region, you won't be able to associate the VMs existing in the old region to the newly created Standard Load Balancer. To work around this limitation, make sure to create a new VM in the new region.
* If your Load Balancer does not have any frontend IP configuration or backend pool, you are likely to hit an error running the script. Make sure they are not empty.
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and follow [Update or delete a load balancer used by virtual machine scale sets](./update-load-balancer-with-vm-scale-set.md) to complete the migration.
+* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. For this type of upgrade, see [Upgrade a basic load balancer used with Virtual Machine Scale Sets](./upgrade-basic-standard-virtual-machine-scale-sets.md) for instructions and more information.
## Change IP allocation method to Static for frontend IP Configuration (Ignore this step if it's already static)
Yes it migrates traffic. If you would like to migrate traffic personally, use [t
## Next steps
-[Learn about Standard Load Balancer](load-balancer-overview.md)
+[Learn about Standard Load Balancer](load-balancer-overview.md)
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
An Azure PowerShell script is available that does the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and follow [Update or delete a load balancer used by virtual machine scale sets](./update-load-balancer-with-vm-scale-set.md) to complete the migration.
+* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. For this type of upgrade, see [Upgrade a basic load balancer used with Virtual Machine Scale Sets](./upgrade-basic-standard-virtual-machine-scale-sets.md) for instructions and more information.
## Download the script
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
ms.suite: integration
Last updated 09/06/2022-+ # Create an integration workflow with single-tenant Azure Logic Apps (Standard) in Visual Studio Code
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
ms.suite: integration Previously updated : 08/20/2022+ Last updated : 09/08/2022 # As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
logic-apps Designer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/designer-overview.md
Last updated 08/20/2022
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-When you work with Azure Logic Apps in the Azure portal, you can edit your [*workflows*](logic-apps-overview.md#workflow) visually or programmatically. After you open a [*logic app* resource](logic-apps-overview.md#logic-app) in the portal, on the resource menu under **Developer**, you can select between [**Code** view](#code-view) and **Designer** view. When you want to visually develop, edit, and run your workflow, select the designer view. You can switch between the designer view and code view at any time.
+When you work with Azure Logic Apps in the Azure portal, you can edit your [*workflows*](logic-apps-overview.md#logic-app-concepts) visually or programmatically. After you open a [*logic app* resource](logic-apps-overview.md#logic-app-concepts) in the portal, on the resource menu under **Developer**, you can select between [**Code** view](#code-view) and **Designer** view. When you want to visually develop, edit, and run your workflow, select the designer view. You can switch between the designer view and code view at any time.
> [!IMPORTANT] > Currently, the latest version of the designer is available only for *Standard* logic app resources, which run in the
When you select the **Designer** view, your workflow opens in the workflow desig
The latest workflow designer offers a new experience with noteworthy features and benefits, for example:

-- A new layout engine that supports more complicated workflows.
+- A new layout engine that supports more complicated workflows.
- You can create and view complicated workflows cleanly and easily, thanks to the new layout engine, a more compact canvas, and updates to the card-based layout.
- Add and edit steps using panels separate from the workflow layout. This change gives you a cleaner and clearer canvas to view your workflow layout. For more information, review [Add steps to workflows](#add-steps-to-workflows).
- Move between steps in your workflow on the designer using keyboard navigation.
The latest workflow designer offers a new experience with noteworthy features an
## Add steps to workflows
-The workflow designer provides a visual way to add, edit, and delete steps in your workflow. As the first step in your workflow, always add a [*trigger*](logic-apps-overview.md#trigger). Then, complete your workflow by adding one or more [*actions*](logic-apps-overview.md#action).
+The workflow designer provides a visual way to add, edit, and delete steps in your workflow. As the first step in your workflow, always add a [*trigger*](logic-apps-overview.md#logic-app-concepts). Then, complete your workflow by adding one or more [*actions*](logic-apps-overview.md#logic-app-concepts).
To add either the trigger or an action to your workflow, follow these steps:

1. Open your workflow in the designer.
-1. On the designer, select **Choose an operation**, which opens a pane named either **Add a trigger** or **Add an action**.
+
+1. On the designer, select **Choose an operation**, which opens a pane named either **Add a trigger** or **Add an action**.
+ 1. In the opened pane, find an operation by filtering the list in the following ways:
- 1. Enter a service, connector, or category in the search bar to show related operations. For example, `Azure Cosmos DB` or `Data Operations`.
- 1. If you know the specific operation you want to use, enter the name in the search bar. For example, `Call an Azure function` or `When an HTTP request is received`.
- 1. Select the **Built-in** tab to only show categories of [*built-in operations*](logic-apps-overview.md#built-in-operations). Or, select the **Azure** tab to show other categories of operations available through Azure.
- 1. You can view only triggers or actions by selecting the **Triggers** or **Actions** tab. However, you can only add a trigger as the first step and an action as a following step. Based on the operation category, only triggers or actions might be available.
- :::image type="content" source="./media/designer-overview/designer-add-operation.png" alt-text="Screenshot of the Logic Apps designer in the Azure portal, showing a workflow being edited to add a new operation." lightbox="./media/designer-overview/designer-add-operation.png":::
-1. Select the operation you want to use.
- :::image type="content" source="./media/designer-overview/designer-filter-operations.png" alt-text="Screenshot of the Logic Apps designer, showing a pane of possible operations that can be filtered by service or name." lightbox="./media/designer-overview/designer-filter-operations.png":::
+
+ 1. Enter a service, connector, or category in the search bar to show related operations. For example, `Azure Cosmos DB` or `Data Operations`.
+
+ 1. If you know the specific operation you want to use, enter the name in the search bar. For example, `Call an Azure function` or `When an HTTP request is received`.
+
+ 1. Select the **Built-in** tab to only show categories of [*built-in operations*](logic-apps-overview.md#logic-app-concepts). Or, select the **Azure** tab to show other categories of operations available through Azure.
+
+ 1. You can view only triggers or actions by selecting the **Triggers** or **Actions** tab. However, you can only add a trigger as the first step and an action as a following step. Based on the operation category, only triggers or actions might be available.
+
+ :::image type="content" source="./media/designer-overview/designer-add-operation.png" alt-text="Screenshot of the Logic Apps designer in the Azure portal, showing a workflow being edited to add a new operation." lightbox="./media/designer-overview/designer-add-operation.png":::
+
+1. Select the operation you want to use.
+
+ :::image type="content" source="./media/designer-overview/designer-filter-operations.png" alt-text="Screenshot of the Logic Apps designer, showing a pane of possible operations that can be filtered by service or name." lightbox="./media/designer-overview/designer-filter-operations.png":::
+ 1. Configure your trigger or action as needed.
- 1. Mandatory fields have a red asterisk (&ast;) in front of the name.
- 1. Some triggers and actions might require you to create a connection to another service. You might need to sign into an account, or enter credentials for a service. For example, if you want to use the Office 365 Outlook connector to send an email, you need to authorize your Outlook email account.
- 1. Some triggers and actions use dynamic content, where you can select variables instead of entering information or expressions.
-1. Select **Save** in the toolbar to save your changes. This step also verifies that your workflow is valid.
+
+ 1. Mandatory fields have a red asterisk (&ast;) in front of the name.
+
+ 1. Some triggers and actions might require you to create a connection to another service. You might need to sign into an account, or enter credentials for a service. For example, if you want to use the Office 365 Outlook connector to send an email, you need to authorize your Outlook email account.
+
+ 1. Some triggers and actions use dynamic content, where you can select variables instead of entering information or expressions.
+
+1. Select **Save** in the toolbar to save your changes. This step also verifies that your workflow is valid.
## Code view
The **Code** view allows you to directly edit the workflow definition file in JS
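As a rough illustration of what you edit in this view, a minimal workflow definition with a Request trigger and a Response action might look like the following sketch; the exact schema version, operation names, and properties depend on your workflow:

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "When_a_HTTP_request_is_received": {
        "type": "Request",
        "kind": "Http"
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "kind": "Http",
        "inputs": {
          "statusCode": 200,
          "body": "Hello from Azure Logic Apps"
        },
        "runAfter": {}
      }
    },
    "outputs": {}
  }
}
```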
:::image type="content" source="./media/designer-overview/code-view.png" alt-text="Screenshot of a Logic Apps workflow in Code view, showing the JSON workflow definition being edited in the Azure portal."::: - ## Next steps > [!div class="nextstepaction"]
logic-apps Logic Apps Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-azure-functions.md
This article shows how to call an Azure function from a logic app workflow. To r
* If you have an OpenAPI definition for your function, the workflow designer gives you a richer experience when your work with function parameters. Before your logic app workflow can find and access functions that have OpenAPI definitions, [set up your function app by following these later steps](#function-swagger).
-* Either a [Consumption or Standard](logic-apps-overview.md#resource-type-and-host-environment-differences) logic app resource and workflow where you want to use the function.
+* Either a [Consumption or Standard](logic-apps-overview.md#resource-environment-differences) logic app resource and workflow where you want to use the function.
Before you can add an action that runs a function in your workflow, the workflow must start with a trigger as the first step. If you're new to logic app workflows, review [What is Azure Logic Apps](logic-apps-overview.md) and [Quickstart: Create your first logic app workflow](quickstart-create-first-logic-app-workflow.md).
logic-apps Logic Apps Deploy Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-deploy-azure-resource-manager-templates.md
ms.suite: integration Previously updated : 08/20/2022- Last updated : 09/07/2022+ ms.devlang: azurecli
logic-apps Logic Apps Diagnosing Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-diagnosing-failures.md
ms.suite: integration + Last updated 09/02/2022
logic-apps Logic Apps Enterprise Integration Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-agreements.md
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
* Exists in the same location or Azure region as your logic app resource.
- * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account requires a [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use artifacts in your workflow.
+ * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account requires a [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use artifacts in your workflow.
- * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account doesn't need a link to your logic app resource but is still required to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account doesn't need a link to your logic app resource but is still required to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
> [!NOTE] > Currently, only the **Logic App (Consumption)** resource type supports [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
logic-apps Logic Apps Enterprise Integration B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-b2b.md
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
* Exists in the same location or Azure region as your logic app resource.
- * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account requires a [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use artifacts in your workflow.
+ * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account requires a [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use artifacts in your workflow.
- * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account doesn't need a link to your logic app resource but is still required to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), or [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account doesn't need a link to your logic app resource but is still required to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), or [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
> [!NOTE] > Currently, only the **Logic App (Consumption)** resource type supports [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
logic-apps Logic Apps Enterprise Integration Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-certificates.md
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
* Exists in the same location or Azure region as your logic app resource.
- * If you use the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you have to [link your integration account to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use your artifacts in your workflow.
+ * If you use the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences), you have to [link your integration account to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use your artifacts in your workflow.
To create and add certificates for use in **Logic App (Consumption)** workflows, you don't need a logic app resource yet. However, when you're ready to use those certificates in your workflows, your logic app resource requires a linked integration account that stores those certificates.
- * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account doesn't need a link to your logic app resource but is still required to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account doesn't need a link to your logic app resource but is still required to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
> [!NOTE] > Currently, only the **Logic App (Consumption)** resource type supports [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
This article shows how to add the **Flat File** encoding and decoding actions to
For more information, review the following documentation:
-* [Consumption versus Standard logic apps](logic-apps-overview.md#resource-type-and-host-environment-differences)
+* [Consumption versus Standard logic apps](logic-apps-overview.md#resource-environment-differences)
* [Integration account built-in connectors](../connectors/built-in.md#integration-account-built-in) * [Built-in connectors overview for Azure Logic Apps](../connectors/built-in.md) * [Managed or Azure-hosted connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
For more information, review the following documentation:
* [Perform data operations in Azure Logic Apps](logic-apps-perform-data-operations.md) * [Liquid open-source template language](https://shopify.github.io/liquid/)
-* [Consumption versus Standard logic apps](logic-apps-overview.md#resource-type-and-host-environment-differences)
+* [Consumption versus Standard logic apps](logic-apps-overview.md#resource-environment-differences)
* [Integration account built-in connectors](../connectors/built-in.md#integration-account-built-in) * [Built-in connectors overview for Azure Logic Apps](../connectors/built-in.md) * [Managed or Azure-hosted connectors overview for Azure Logic Apps](../connectors/managed.md) and [Managed or Azure-hosted connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-maps.md
This article shows how to add a map to your integration account. If you're worki
* Edit your maps or payloads to reduce memory consumption.
- * Create [Standard logic app workflows](logic-apps-overview.md#resource-type-and-host-environment-differences) instead.
+ * Create [Standard logic app workflows](logic-apps-overview.md#resource-environment-differences) instead.
These workflows run in single-tenant Azure Logic Apps, which offers dedicated and flexible options for compute and memory resources. However, Standard workflows support only XSLT 1.0 and don't support referencing external assemblies from maps.
logic-apps Logic Apps Enterprise Integration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-overview.md
After you create an integration account and add your artifacts, you can start bu
> logic app resource and use those artifacts across multiple workflows within the *same logic app resource*. > You still need an integration account to store other artifacts such as partners and agreements, but linking > is no longer necessary, so this capability doesn't exist. For more information about these resource types, review
-> [What is Azure Logic Apps - Resource type and host environments](logic-apps-overview.md#resource-type-and-host-environment-differences).
+> [What is Azure Logic Apps - Resource type and host environments](logic-apps-overview.md#resource-environment-differences).
The following diagram shows the high-level steps to start building B2B logic app workflows:
logic-apps Logic Apps Enterprise Integration Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-partners.md
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
* Exists in the same location or Azure region as your logic app resource.
- * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account requires a [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use artifacts in your workflow.
+ * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account requires a [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use artifacts in your workflow.
- * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account doesn't need a link to your logic app resource but is still required to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account doesn't need a link to your logic app resource but is still required to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
> [!NOTE] > Currently, only the **Logic App (Consumption)** resource type supports [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
logic-apps Logic Apps Enterprise Integration Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-transform.md
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
* Exists in the same location or Azure region as your logic app resource where you plan to use the **Transform XML** action.
- * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account requires the following items:
+ * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account requires the following items:
* The [map](logic-apps-enterprise-integration-maps.md) to use for transforming XML content. * A [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account).
- * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you don't store maps in your integration account. Instead, you can [directly add maps to your logic app resource](logic-apps-enterprise-integration-maps.md) using either the Azure portal or Visual Studio Code. Only XSLT 1.0 is currently supported. You can then use these maps across multiple workflows within the *same logic app resource*.
+ * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences), you don't store maps in your integration account. Instead, you can [directly add maps to your logic app resource](logic-apps-enterprise-integration-maps.md) using either the Azure portal or Visual Studio Code. Only XSLT 1.0 is currently supported. You can then use these maps across multiple workflows within the *same logic app resource*.
You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. However, you don't need to link your logic app resource to your integration account, so the linking capability doesn't exist. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
logic-apps Logic Apps Enterprise Integration Xml Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-xml-validation.md
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
* Exists in the same location or Azure region as your logic app resource where you plan to use the **XML Validation*** action.
- * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), your integration account requires the following items:
+ * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account requires the following items:
* The [schema](logic-apps-enterprise-integration-schemas.md) to use for validating XML content. * A [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account).
- * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you don't store schemas in your integration account. Instead, you can [directly add schemas to your logic app resource](logic-apps-enterprise-integration-schemas.md) using either the Azure portal or Visual Studio Code. You can then use these schemas across multiple workflows within the *same logic app resource*.
+ * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences), you don't store schemas in your integration account. Instead, you can [directly add schemas to your logic app resource](logic-apps-enterprise-integration-schemas.md) using either the Azure portal or Visual Studio Code. You can then use these schemas across multiple workflows within the *same logic app resource*.
You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. However, you don't need to link your logic app resource to your integration account, so the linking capability doesn't exist. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
logic-apps Logic Apps Exception Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-exception-handling.md
+ Last updated 09/07/2022
logic-apps Logic Apps Http Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-http-endpoint.md
Previously updated : 11/19/2020+ Last updated : 09/22/2022 # Call, trigger, or nest logic apps by using HTTPS endpoints in Azure Logic Apps + To make your logic app callable through a URL and able to receive inbound requests from other services, you can natively expose a synchronous HTTPS endpoint by using a request-based trigger on your logic app. With this capability, you can call your logic app from other logic apps and create a pattern of callable endpoints. To set up a callable endpoint for handling inbound calls, you can use any of these trigger types: * [Request](../connectors/connectors-native-reqres.md)
To make your logic app callable through a URL and able to receive inbound reques
This article shows how to create a callable endpoint on your logic app by using the Request trigger and call that endpoint from another logic app. All principles apply identically to the other trigger types that you can use to receive inbound requests. - For more information about security, authorization, and encryption for inbound calls to your logic app, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests). ## Prerequisites
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-overview.md
Title: Overview
-description: Azure Logic Apps is a cloud platform for creating and running automated workflows that integrate apps, data, services, and systems using little to no code. Workflows can run in a multi-tenant, single-tenant, or dedicated environment.
+description: Create and run automated workflows so that you can integrate apps, data, services, and systems using little to no code. In Azure, your workflows can run in a multi-tenant, single-tenant, or dedicated environment.
ms.suite: integration - Previously updated : 09/22/2022+ Last updated : 09/23/2022 # What is Azure Logic Apps?
-Azure Logic Apps is a cloud-based platform for creating and running automated workflows that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. As a member of [Azure Integration Services](https://azure.microsoft.com/product-categories/integration/), Azure Logic Apps simplifies the way that you connect legacy, modern, and cutting-edge systems across cloud, on premises, and hybrid environments. Learn more about [Azure Logic Apps on the Azure website](https://azure.microsoft.com/services/logic-apps).
+Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code. By using the visual designer and selecting from prebuilt operations, you can quickly build a workflow that integrates and manages your apps, data, services, and systems.
-The following list describes just a few example tasks, business processes, and workloads that you can automate using Azure Logic Apps:
+Azure Logic Apps simplifies the way that you connect legacy, modern, and cutting-edge systems across cloud, on premises, and hybrid environments and provides low-code-no-code tools for you to develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios.
+
+This list describes just a few example tasks, business processes, and workloads that you can automate using Azure Logic Apps:
* Schedule and send email notifications using Office 365 when a specific event happens, for example, a new file is uploaded.
The following list describes just a few example tasks, business processes, and w
* Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review.
-Based on the logic app resource type that you choose, your logic app workflows can run in either multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, a dedicated integration service environment, or an App Service Environment (v3). With the last three environments, your workflows can access an Azure virtual network more easily. You can also run logic app workflows in containers when you create single tenant-based workflows using Azure Arc enabled Logic Apps.
-
-To communicate with any service endpoint, run your own code, organize your workflow, or manipulate data, you can use [*built-in* connector operations](#built-in-operations) in your workflow. These operations run natively on the Azure Logic Apps runtime. To securely access and run operations on data and entities in many services such as Azure, Microsoft, and other web apps or on-premises systems, you can use [*managed* (Azure-hosted) connector operations](#managed-connector) in your workflows. Choose from [many hundreds of connectors in an abundant and growing Azure ecosystem](/connectors/connector-reference/connector-reference-logicapps-connectors), for example:
-
-* Azure services such as Blob Storage and Service Bus
-
-* Office 365 services such as Outlook, Excel, and SharePoint
-
-* Database servers such as SQL and Oracle
-
-* Enterprise systems such as SAP and IBM MQ
-
-* File shares such as FTP and SFTP
-
-For B2B integration scenarios, Azure Logic Apps includes capabilities from [BizTalk Server](/biztalk/core/introducing-biztalk-server). To define business-to-business (B2B) artifacts, you create [*integration account*](#integration-account) where you store these artifacts. After you link this account to your logic app, your workflows can use these B2B artifacts and exchange messages that comply with Electronic Data Interchange (EDI) and Enterprise Application Integration (EAI) standards.
-
-For more information, review the following documentation:
-
-* [Connectors overview for Azure Logic Apps](../connectors/apis-list.md)
-
-* [Managed connectors](../connectors/managed.md)
-
-* [Built-in connectors](../connectors/built-in.md)
-
-* [B2B enterprise integration solutions with Azure Logic Apps](logic-apps-enterprise-integration-overview.md)
-
-* [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
-* [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md)
+If you're ready to try creating your first logic app workflow, see [Get started](#get-started).
> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Go-serverless-Enterprise-integration-with-Azure-Logic-Apps/player]
+For more information, see [Azure Logic Apps on the Azure website](https://azure.microsoft.com/services/logic-apps) and other [Azure Integration Services](https://azure.microsoft.com/product-categories/integration/).
+ <a name="logic-app-concepts"></a> ## Key terms
-The following list briefly defines terms and core concepts in Azure Logic Apps.
-
-### Logic app
-
-A *logic app* is the Azure resource you create when you want to build a workflow. There are [multiple logic app resource types that run in different environments](#resource-environment-differences).
-
-### Workflow
+The following table briefly defines core terminology and concepts in Azure Logic Apps.
-A *workflow* is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.
+| Term | Description |
+||-|
+| **Logic app** | The Azure resource you create when you want to build a workflow. There are [multiple logic app resource types that run in different environments](#resource-environment-differences). |
+| **Workflow** | A series of steps that defines a task, business process, or workload. Each workflow starts with a single trigger, after which you must add one or more actions. |
+| **Trigger** | Always the first step in any workflow and specifies the condition for running any further steps in that workflow. For example, a trigger event might be getting an email in your inbox or detecting a new file in a storage account. |
+| **Action** | Each subsequent step in a workflow that follows after the trigger. Every action runs some operation in a workflow. |
+| **Built-in connector** | This connector type provides operations that run natively in Azure Logic Apps. For example, built-in operations provide ways for you to control your workflow's schedule or structure, run your own code, manage and manipulate data, send or receive requests to an endpoint, and complete other tasks in your workflow. <br><br>For example, you can start almost any workflow on a schedule when you use the Recurrence trigger. Or, you can have your workflow wait until called when you use the Request trigger. Such operations don't usually require that you create a connection from your workflow. <br><br>While most built-in operations aren't associated with any service or system, some built-in operations are available for specific services, such as Azure Functions or Azure App Service. For more information and examples, review [Built-in connectors for Azure Logic Apps](../connectors/built-in.md). |
+| **Managed connector** | This connector type is a prebuilt proxy or wrapper for a REST API that you can use to access a specific app, data, service, or system. Before you can use most managed connectors, you must first create a connection from your workflow and authenticate your identity. Managed connectors are published, hosted, and maintained by Microsoft. <br><br>For example, you can start your workflow with a trigger or run an action that works with a service such as Office 365, Salesforce, or file servers. For more information, review [Managed connectors for Azure Logic Apps](../connectors/managed.md). |
+| **Integration account** | Create this Azure resource when you want to define and store B2B artifacts for use in your workflows. After you [create and link an integration account](logic-apps-enterprise-integration-create-integration-account.md) to your logic app, your workflows can use these B2B artifacts. Your workflows can also exchange messages that follow Electronic Data Interchange (EDI) and Enterprise Application Integration (EAI) standards. <br><br>For example, you can define trading partners, agreements, schemas, maps, and other B2B artifacts. You can create workflows that use these artifacts and exchange messages over protocols such as AS2, EDIFACT, X12, and RosettaNet. |
-### Trigger
-
-A *trigger* is always the first step in any workflow and specifies the condition for running any further steps in that workflow. For example, a trigger event might be getting an email in your inbox or detecting a new file in a storage account.
+## Why use Azure Logic Apps
-### Action
+The Azure Logic Apps integration platform provides hundreds of prebuilt connectors so you can connect and integrate apps, data, services, and systems more easily and quickly. You can focus more on designing and implementing your solution's business logic and functionality, not on figuring out how to access your resources.
-An *action* is each step in a workflow after the trigger. Every action runs some operation in a workflow.
+To communicate with any service endpoint, run your own code, control your workflow structure, manipulate data, or connect to commonly used services with better performance, you can use [built-in connector operations](#logic-app-concepts). These operations run natively on the Azure Logic Apps runtime.
-### Built-in operations
+To access and run operations on resources in services such as Azure, Microsoft, other external web apps and services, or on-premises systems, you can use [Microsoft-managed (Azure-hosted) connector operations](#logic-app-concepts). Choose from [hundreds of connectors in a growing Azure ecosystem](/connectors/connector-reference/connector-reference-logicapps-connectors), for example:
-A *built-in* trigger or action is an operation that runs natively in Azure Logic Apps. For example, built-in operations provide ways for you to control your workflow's schedule or structure, run your own code, manage and manipulate data, send or receive requests to an endpoint, and complete other tasks in your workflow.
+* Azure services such as Blob Storage and Service Bus
-Most built-in operations aren't associated with any service or system, but some built-in operations are available for specific services, such as Azure Functions or Azure App Service. Many also don't require that you first create a connection from your workflow and authenticate your identity. For more information and examples, review [Built-in operations for Azure Logic Apps](../connectors/built-in.md).
+* Office 365 services such as Outlook, Excel, and SharePoint
-For example, you can start almost any workflow on a schedule when you use the Recurrence trigger. Or, you can have your workflow wait until called when you use the Request trigger.
+* Database servers such as SQL and Oracle
-### Managed connector
+* Enterprise systems such as SAP and IBM MQ
-A *managed connector* is a prebuilt proxy or wrapper for a REST API that you can use to access a specific app, data, service, or system. Before you can use most managed connectors, you must first create a connection from your workflow and authenticate your identity. Managed connectors are published, hosted, and maintained by Microsoft. For more information, review [Managed connectors for Azure Logic Apps](../connectors/managed.md).
+* File shares such as FTP and SFTP
-For example, you can start your workflow with a trigger or run an action that works with a service such as Office 365, Salesforce, or file servers.
+For more information, review the following documentation:
-### Integration account
+* [Connectors overview for Azure Logic Apps](../connectors/apis-list.md)
-An *integration account* is the Azure resource you create when you want to define and store B2B artifacts for use in your workflows. After you [create and link an integration account](logic-apps-enterprise-integration-create-integration-account.md) to your logic app, your workflows can use these B2B artifacts. Your workflows can also exchange messages that follow Electronic Data Interchange (EDI) and Enterprise Application Integration (EAI) standards.
+* [Managed connectors](../connectors/managed.md)
-For example, you can define trading partners, agreements, schemas, maps, and other B2B artifacts. You can create workflows that use these artifacts and exchange messages over protocols such as AS2, EDIFACT, X12, and RosettaNet.
+* [Built-in connectors](../connectors/built-in.md)
-<a name="how-do-logic-apps-work"></a>
+You usually won't have to write any code. However, if you do need to write code, you can create code snippets using [Azure Functions](../azure-functions/functions-overview.md) and run that code from your workflow. You can also create code snippets that run in your workflow by using the [**Inline Code** action](logic-apps-add-run-inline-code.md). If your workflow needs to interact with events from Azure services, custom apps, or other solutions, you can monitor, route, and publish events using [Azure Event Grid](../event-grid/overview.md).
-## How logic apps work
+Azure Logic Apps is fully managed by Microsoft Azure, which frees you from worrying about hosting, scaling, managing, monitoring, and maintaining solutions built with these services. When you use these capabilities to create ["serverless" apps and solutions](logic-apps-serverless-overview.md), you can just focus on the business logic and functionality. These services automatically scale to meet your needs, make integrations faster, and help you build robust cloud apps using little to no code.
-In a logic app resource, each workflow always starts with a single [trigger](#trigger). A trigger fires when a condition is met, for example, when a specific event happens or when data meets specific criteria. Many triggers include [scheduling capabilities](concepts-schedule-automated-recurring-tasks-workflows.md) that control how often your workflow runs. After the trigger fires, one or more [actions](#action) run operations that process, handle, or convert data that travels through the workflow, or that advance the workflow to the next step.
+To learn how other companies improved their agility and increased focus on their core businesses when they combined Azure Logic Apps with other Azure services and Microsoft products, check out these [customer stories](https://aka.ms/logic-apps-customer-stories).
-The following screenshot shows part of an example enterprise workflow. This workflow uses conditions and switches to determine the next action. Let's say you have an order system, and your workflow processes incoming orders. You want to review orders above a certain cost manually. Your workflow already has previous steps that determine how much an incoming order costs. So, you create an initial condition based on that cost value. For example:
+## How does Azure Logic Apps differ from Functions, WebJobs, and Power Automate?
-* If the order is below a certain amount, the condition is false. So, the workflow processes the order.
All these services help you connect and bring together disparate systems. Each service has its advantages and benefits, so combining their capabilities is the best way to quickly build a scalable, full-featured integration system. For more information, review [Choose between Logic Apps, Functions, WebJobs, and Power Automate](../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md).
-* If the condition is true, the workflow sends an email for manual review. A switch determines the next step.
+## More about Azure Logic Apps
- * If the reviewer approves, the workflow continues to process the order.
+The following sections provide more information about the capabilities and benefits in Azure Logic Apps:
- * If the reviewer escalates, the workflow sends an escalation email to get more information about the order.
+### Visually create and edit workflows with easy-to-use tools
- * If the escalation requirements are met, the response condition is true. So, the order is processed.
+Save time and simplify complex processes by using the visual design tools in Azure Logic Apps. Create your workflows from start to finish by using the Azure Logic Apps workflow designer in the Azure portal, Visual Studio Code, or Visual Studio. Just start your workflow with a trigger, and add any number of actions from the [connectors gallery](/connectors/connector-reference/connector-reference-logicapps-connectors).
- * If the response condition is false, an email is sent regarding the problem.
+If you're creating a multi-tenant based logic app, get started faster when you [create a workflow from the templates gallery](logic-apps-create-logic-apps-from-templates.md). These templates are available for common workflow patterns, which range from simple connectivity for Software-as-a-Service (SaaS) apps to advanced B2B solutions plus "just for fun" templates.
+### Connect different systems across various environments
-You can visually create workflows using the Azure Logic Apps workflow designer in the Azure portal, Visual Studio Code, or Visual Studio. Each workflow also has an underlying definition that's described using JavaScript Object Notation (JSON). If you prefer, you can edit workflows by changing this JSON definition. For some creation and management tasks, Azure Logic Apps provides Azure PowerShell and Azure CLI command support. For automated deployment, Azure Logic Apps supports Azure Resource Manager templates.
+Some patterns and processes are easy to describe but hard to implement in code. The Azure Logic Apps platform helps you seamlessly connect disparate systems across cloud, on-premises, and hybrid environments. For example, you can connect a cloud marketing solution to an on-premises billing system, or centralize messaging across APIs and systems using Azure Service Bus. Azure Logic Apps provides a fast, reliable, and consistent way to deliver reusable and reconfigurable solutions for these scenarios.
<a name="resource-environment-differences"></a>
-## Resource type and host environment differences
+### Create and deploy to different environments
-To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
+Based on your scenario, solution requirements, and desired capabilities, you'll choose to create either a Consumption or Standard logic app workflow. Depending on this choice, the workflow runs in either multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, an App Service Environment (v3), or a dedicated integration service environment. With the last three environments, your workflows can more easily access resources protected by Azure virtual networks. If you create single tenant-based workflows using Azure Arc enabled Logic Apps, you can also run workflows in containers. For more information, see [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md) and [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md)
-The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the **Logic App (Standard)** resource type. You'll also learn the differences between the *single-tenant environment*, *multi-tenant environment*, *integration service environment* (ISE), and *App Service Environment v3 (ASEv3)* for deploying, hosting, and running your logic app workflows.
+The following table briefly summarizes differences between a Consumption and Standard logic app workflow. You'll also learn the differences between the *multi-tenant environment*, *integration service environment* (ISE), *single-tenant environment*, and *App Service Environment v3 (ASEv3)* for deploying, hosting, and running your logic app workflows.
[!INCLUDE [Logic app resource type and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)]
-## Why use Azure Logic Apps
-
-The Azure Logic Apps integration platform provides prebuilt Microsoft-managed API connectors and built-in operations so you can connect and integrate apps, data, services, and systems more easily and quickly. You can focus more on designing and implementing your solution's business logic and functionality, not on figuring out how to access your resources.
-
-You usually won't have to write any code. However, if you do need to write code, you can create code snippets using [Azure Functions](../azure-functions/functions-overview.md) and run that code from your workflow. You can also create code snippets that run in your workflow by using the [**Inline Code** action](logic-apps-add-run-inline-code.md). If your workflow needs to interact with events from Azure services, custom apps, or other solutions, you can monitor, route, and publish events using [Azure Event Grid](../event-grid/overview.md).
-
-Azure Logic Apps is fully managed by Microsoft Azure, which frees you from worrying about hosting, scaling, managing, monitoring, and maintaining solutions built with these services. When you use these capabilities to create ["serverless" apps and solutions](logic-apps-serverless-overview.md), you can just focus on the business logic and functionality. These services automatically scale to meet your needs, make integrations faster, and help you build robust cloud apps using little to no code.
-
-To learn how other companies improved their agility and increased focus on their core businesses when they combined Azure Logic Apps with other Azure services and Microsoft products, check out these [customer stories](https://aka.ms/logic-apps-customer-stories).
-
-The following sections provide more information about the capabilities and benefits in Azure Logic Apps:
+### First-class support for enterprise integration and B2B scenarios
-#### Visually create and edit workflows with easy-to-use tools
-
-Save time and simplify complex processes by using the visual design tools in Azure Logic Apps. Create your workflows from start to finish by using the Azure Logic Apps workflow designer in the Azure portal, Visual Studio Code, or Visual Studio. Just start your workflow with a trigger, and add any number of actions from the [connectors gallery](/connectors/connector-reference/connector-reference-logicapps-connectors).
-
-If you're creating a multi-tenant based logic app, get started faster when you [create a workflow from the templates gallery](logic-apps-create-logic-apps-from-templates.md). These templates are available for common workflow patterns, which range from simple connectivity for Software-as-a-Service (SaaS) apps to advanced B2B solutions plus "just for fun" templates.
-
-#### Connect different systems across various environments
-
-Some patterns and processes are easy to describe but hard to implement in code. The Azure Logic Apps platform helps you seamlessly connect disparate systems across cloud, on-premises, and hybrid environments. For example, you can connect a cloud marketing solution to an on-premises billing system, or centralize messaging across APIs and systems using Azure Service Bus. Azure Logic Apps provides a fast, reliable, and consistent way to deliver reusable and reconfigurable solutions for these scenarios.
-
-#### Write once, reuse often
-
-Create your logic apps as Azure Resource Manager templates so that you can [set up and automate deployments](logic-apps-azure-resource-manager-templates-overview.md) across multiple environments and regions.
-
-#### First-class support for enterprise integration and B2B scenarios
-
-Businesses and organizations electronically communicate with each other by using industry-standard but different message protocols and formats, such as EDIFACT, AS2, X12, and RosettaNet. By using the [enterprise integration capabilities](logic-apps-enterprise-integration-overview.md) supported by Azure Logic Apps, you can create workflows that transform message formats used by trading partners into formats that your organization's systems can interpret and process. Azure Logic Apps handles these exchanges smoothly and securely with encryption and digital signatures.
+Businesses and organizations electronically communicate with each other by using industry-standard but different message protocols and formats, such as EDIFACT, AS2, X12, and RosettaNet. By using the [enterprise integration capabilities](logic-apps-enterprise-integration-overview.md) supported by Azure Logic Apps, you can create workflows that transform message formats used by trading partners into formats that your organization's systems can interpret and process. Azure Logic Apps handles these exchanges smoothly and securely with encryption and digital signatures. For B2B integration scenarios, Azure Logic Apps includes capabilities from [BizTalk Server](/biztalk/core/introducing-biztalk-server). To define business-to-business (B2B) artifacts, you create an [*integration account*](#logic-app-concepts) where you store these artifacts. After you link this account to your logic app, your workflows can use these B2B artifacts and exchange messages that comply with Electronic Data Interchange (EDI) and Enterprise Application Integration (EAI) standards. For more information, review the following documentation:
You can start small with your current systems and services, and then grow incrementally at your own pace. When you're ready, the Azure Logic Apps platform helps you implement and scale up to more mature integration scenarios by providing these capabilities and more:
You can start small with your current systems and services, and then grow increm
For example, if you use Microsoft BizTalk Server, your workflows can communicate with your BizTalk Server using the [BizTalk Server connector](../connectors/managed.md#on-premises-connectors). You can then run or extend BizTalk-like operations in your workflows by using [integration account connectors](../connectors/managed.md#integration-account-connectors). In the other direction, BizTalk Server can communicate with your workflows by using the [Microsoft BizTalk Server Adapter for Azure Logic Apps](https://www.microsoft.com/download/details.aspx?id=54287). Learn how to [set up and use the BizTalk Server Adapter](/biztalk/core/logic-app-adapter) in your BizTalk Server.
-#### Built-in extensibility
+### Write once, reuse often
+
+Create your logic apps as Azure Resource Manager templates so that you can [set up and automate deployments](logic-apps-azure-resource-manager-templates-overview.md) across multiple environments and regions.
+
+### Built-in extensibility
If no suitable connector is available to run the code you want, you can create and call your own code snippets from your workflow by using [Azure Functions](../azure-functions/functions-overview.md). Or, create your own [APIs](logic-apps-create-api-app.md) and [custom connectors](custom-connector-overview.md) that you can call from your workflows.
-#### Access resources inside Azure virtual networks
+### Access resources inside Azure virtual networks
Logic app workflows can access secured resources such as virtual machines (VMs), other services, and systems that are inside an [Azure virtual network](../virtual-network/virtual-networks-overview.md) when you create an [*integration service environment* (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is a dedicated instance of the Azure Logic Apps service that uses dedicated resources and runs separately from the global multi-tenant Azure Logic Apps service.
Running logic apps in your own dedicated instance helps reduce the impact that o
When you create an ISE, Azure *injects* or deploys that ISE into your Azure virtual network. You can then use this ISE as the location for the logic apps and integration accounts that need access. For more information about creating an ISE, review [Connect to Azure virtual networks from Azure Logic Apps](connect-virtual-network-vnet-isolated-environment.md).
-#### Pricing options
+<a name="how-do-logic-apps-work"></a>
-Each logic app resource type, which differs by capabilities and where they run (multi-tenant, single-tenant, integration service environment), has a different [pricing model](logic-apps-pricing.md). For example, multi-tenant based logic apps use consumption pricing, while logic apps in an integration service environment use fixed pricing. Learn more about [pricing and metering](logic-apps-pricing.md) for Azure Logic Apps.
+## How logic apps work
-## How does Azure Logic Apps differ from Functions, WebJobs, and Power Automate?
+In a logic app, each workflow always starts with a single [trigger](#logic-app-concepts). A trigger fires when a condition is met, for example, when a specific event happens or when data meets specific criteria. Many triggers include [scheduling capabilities](concepts-schedule-automated-recurring-tasks-workflows.md) that control how often your workflow runs. After the trigger fires, one or more [actions](#logic-app-concepts) run operations that process, handle, or convert data that travels through the workflow, or that advance the workflow to the next step.
-All these services help you connect and bring together disparate systems. Each service has their advantages and benefits, so combining their capabilities is the best way to quickly build a scalable, full-featured integration system. For more information, review [Choose between Logic Apps, Functions, WebJobs, and Power Automate](../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md).
+The following screenshot shows part of an example enterprise workflow. This workflow uses conditions and switches to determine the next action. Let's say you have an order system, and your workflow processes incoming orders. You want to review orders above a certain cost manually. Your workflow already has previous steps that determine how much an incoming order costs. So, you create an initial condition based on that cost value. For example:
+
+* If the order is below a certain amount, the condition is false. So, the workflow processes the order.
+
+* If the condition is true, the workflow sends an email for manual review. A switch determines the next step.
+
+ * If the reviewer approves, the workflow continues to process the order.
+
+ * If the reviewer escalates, the workflow sends an escalation email to get more information about the order.
+
+ * If the escalation requirements are met, the response condition is true. So, the order is processed.
+
+ * If the response condition is false, an email is sent regarding the problem.
++
+You can visually create workflows using the Azure Logic Apps workflow designer in the Azure portal, Visual Studio Code, or Visual Studio. Each workflow also has an underlying definition that's described using JavaScript Object Notation (JSON). If you prefer, you can edit workflows by changing this JSON definition. For some creation and management tasks, Azure Logic Apps provides Azure PowerShell and Azure CLI command support. For automated deployment, Azure Logic Apps supports Azure Resource Manager templates.
+
+## Pricing options
+
+Each logic app resource type, which differs by capabilities and where they run (multi-tenant, single-tenant, integration service environment), has a different [pricing model](logic-apps-pricing.md). For example, multi-tenant based logic apps use consumption pricing, while logic apps in an integration service environment use fixed pricing. Learn more about [pricing and metering](logic-apps-pricing.md) for Azure Logic Apps.
## Get started
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
ms.suite: integration Previously updated : 08/08/2022+ Last updated : 09/07/2022 # As a developer, I want to connect to my Standard logic app workflows with virtual networks using private endpoints and virtual network integration.
logic-apps Tutorial Build Schedule Recurring Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-build-schedule-recurring-logic-app-workflow.md
ms.suite: integration -+ Last updated 09/13/2022
In this tutorial, you learn how to:
> * Add a Bing Maps action that gets the travel time for a route. > * Add an action that creates a variable, converts the travel time from seconds to minutes, and stores that result in the variable. > * Add a condition that compares the travel time against a specified limit.
-> * Add an action that sends you email if the travel time exceeds the limit.
+> * Add an action that sends an email if the travel time exceeds the limit.
When you're done, your workflow looks similar to the following high level example:
Next, add the Recurrence [trigger](../logic-apps/logic-apps-overview.md#logic-ap
| Property | Value | Description | |-|-|-|
- | **On these days** | Monday,Tuesday,Wednesday,Thursday,Friday | This setting is available only when you set the **Frequency** to **Week**. |
- | **At these hours** | 7,8,9 | This setting is available only when you set the **Frequency** to **Week** or **Day**. For this recurrence, select the hours of the day. This example runs at the `7`, `8`, and `9`-hour marks. |
- | **At these minutes** | 0,15,30,45 | This setting is available only when you set the **Frequency** to **Week** or **Day**. For this recurrence, select the minutes of the day. This example starts at the zero-hour mark and runs every 15 minutes. |
+ | **On these days** | Monday, Tuesday, Wednesday, Thursday, Friday | This setting is available only when you set the **Frequency** to **Week**. |
+ | **At these hours** | 7, 8, 9 | This setting is available only when you set the **Frequency** to **Week** or **Day**. For this recurrence, select the hours of the day. This example runs at the `7`, `8`, and `9`-hour marks. |
+ | **At these minutes** | 0, 15, 30, 45 | This setting is available only when you set the **Frequency** to **Week** or **Day**. For this recurrence, select the minutes of the day. This example starts at the zero-hour mark and runs every 15 minutes. |
|||| This trigger fires every weekday, every 15 minutes, starting at 7:00 AM and ending at 9:45 AM. The **Preview** box shows the recurrence schedule. For more information, see [Schedule tasks and workflows](../connectors/connectors-native-recurrence.md) and [Workflow actions and triggers](../logic-apps/logic-apps-workflow-actions-triggers.md#recurrence-trigger).
Now, add an action that sends you email when the travel time exceeds your limit.
1. In the condition's **True** branch, select **Add an action**.
-1. Under **Choose an operation**, select **Standard**. In the search box, enter **send email**. The list returns many results, so to help you filter list, first select the email connector that you want.
+1. Under **Choose an operation**, select **Standard**. In the search box, enter **send email**. The list returns many results, so to help you filter the list, first select the email connector that you want.
For example, if you have an Outlook email account, select the connector for your account type:
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
ms.suite: integration Previously updated : 05/05/2022+ Last updated : 09/20/2022 # Reference guide to workflow expression functions in Azure Logic Apps and Power Automate
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
The following table highlights the key differences between managed online endpoi
| **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported | | **Virtual Network (VNET)** | [Supported](how-to-secure-online-endpoint.md) (preview) | Supported | | **View costs** | [Endpoint and deployment level](how-to-view-online-endpoints-costs.md) | Cluster level |
-| **Mirrored traffic** | [Supported](how-to-safely-rollout-managed-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) | Unsupported
+| **Mirrored traffic** | [Supported](how-to-safely-rollout-managed-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) | Unsupported |
+| **No-code deployment** | Supported [MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models | Supported [MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models |
### Managed online endpoints
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Following is a sample policy to default a shutdown schedule at 10 PM PST.
} ```
+## Assign managed identity (preview)
+
+You can assign a system-assigned or user-assigned [managed identity](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance, or use a common user-assigned managed identity to permit access to a specific storage account.
+
+You can create a compute instance with a managed identity from Azure Machine Learning studio:
+
+1. Fill out the form to [create a new compute instance](?tabs=azure-studio#create).
+1. Select **Next: Advanced Settings**.
+1. Enable **Assign a managed identity**.
+1. Select **System-assigned** or **User-assigned** under **Identity type**.
+1. If you selected **User-assigned**, select the subscription and the name of the identity.
+
+You can use the v2 CLI to create a compute instance with a system-assigned managed identity:
+
+```azurecli
+az ml compute create --name myinstance --identity-type SystemAssigned --type ComputeInstance --resource-group my-resource-group --workspace-name my-workspace
+```
+
+You can also use the v2 CLI with a YAML file, for example, to create a compute instance with a user-assigned managed identity:
+
+```azurecli
+az ml compute create --file compute.yaml --resource-group my-resource-group --workspace-name my-workspace
+```
+
+The identity definition is contained in the compute.yaml file:
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json
+name: myinstance
+type: computeinstance
+identity:
+ type: user_assigned
+ user_assigned_identities:
+ - resource_id: identity_resource_id
+```
+
+Once the managed identity is created, enable [identity-based data access](how-to-identity-based-data-access.md) to your storage accounts for that identity. Then, when you work on the compute instance, the managed identity is used automatically to authenticate against data stores.
+
+You can also use the managed identity manually to authenticate against other Azure resources. For example, to use it to get an Azure Resource Manager (ARM) access token, use the following code.
+
+```python
+import os
+import requests
+
+def get_access_token_msi(resource):
+ client_id = os.environ.get("DEFAULT_IDENTITY_CLIENT_ID", None)
+ resp = requests.get(f"{os.environ['MSI_ENDPOINT']}?resource={resource}&clientid={client_id}&api-version=2017-09-01", headers={'Secret': os.environ["MSI_SECRET"]})
+ resp.raise_for_status()
+ return resp.json()["access_token"]
+
+arm_access_token = get_access_token_msi("https://management.azure.com")
+```
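+
+As a usage sketch only (the subscriptions list call below is just one example of an ARM endpoint; adapt the URL to the resource you need), you can pass the token as a bearer token in the `Authorization` header:
+
+```python
+import requests
+
+# Reuse arm_access_token from the snippet above to call an ARM REST endpoint.
+headers = {"Authorization": f"Bearer {arm_access_token}"}
+resp = requests.get(
+    "https://management.azure.com/subscriptions?api-version=2020-01-01",
+    headers=headers,
+)
+resp.raise_for_status()
+print([sub["displayName"] for sub in resp.json()["value"]])
+```
+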
+ ## Add custom applications such as RStudio (preview) You can set up other applications, such as RStudio, when creating a compute instance. Follow these steps in studio to set up a custom application on your compute instance
You can set up other applications, such as RStudio, when creating a compute inst
1. Select **Add application** under the **Custom application setup (RStudio Workbench, etc.)** section :::image type="content" source="media/how-to-create-manage-compute-instance/custom-service-setup.png" alt-text="Screenshot showing Custom Service Setup.":::
-
+ ### Setup RStudio Workbench RStudio is one of the most popular IDEs among R developers for ML and data science projects. You can easily set up RStudio Workbench to run on your compute instance, using your own RStudio license, and access the rich feature set that RStudio Workbench offers.
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
-+ ms.devlang: azurecli
ms.devlang: azurecli
> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Learn how to use [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) in Azure Machine Learning with [Managed online endpoints](concept-endpoints.md#managed-online-endpoints).
+Learn how to use [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) in Azure Machine Learning with [online endpoints](concept-endpoints.md#what-are-online-endpoints).
-Triton is multi-framework, open-source software that is optimized for inference. It supports popular machine learning frameworks like TensorFlow, ONNX Runtime, PyTorch, NVIDIA TensorRT, and more. It can be used for your CPU or GPU workloads.
+Triton is multi-framework, open-source software that is optimized for inference. It supports popular machine learning frameworks like TensorFlow, ONNX Runtime, PyTorch, NVIDIA TensorRT, and more. It can be used for your CPU or GPU workloads. No-code deployment for Triton models is supported in both [managed online endpoints and Kubernetes online endpoints](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
-In this article, you will learn how to deploy Triton and a model to a managed online endpoint. Information is provided on using the CLI (command line), Python SDK v2, and Azure Machine Learning studio.
+In this article, you will learn how to deploy Triton and a model to a [managed online endpoint](concept-endpoints.md#managed-online-endpoints). Information is provided on using the CLI (command line), Python SDK v2, and Azure Machine Learning studio.
> [!NOTE] > * [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) is an open-source third-party software that is integrated in Azure Machine Learning.
-> * While Azure Machine Learning online endpoints are generally available, _using Triton with an online endpoint deployment is still in preview_.
+> * While Azure Machine Learning online endpoints are generally available, _using Triton with an online endpoint/deployment is still in preview_.
## Prerequisites
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
+
+ Title: Azure DevOps for CI/CD
+
+description: Use Azure Pipelines for flexible MLOps automation
+++++ Last updated : 09/28/2022++++
+# Use Azure Pipelines with Azure Machine Learning
+
+**Azure DevOps Services | Azure DevOps Server 2022 - Azure DevOps Server 2019**
+
+You can use an [Azure DevOps pipeline](/azure/devops/pipelines/) to automate the machine learning lifecycle. Some of the operations you can automate are:
+
+* Data preparation (extract, transform, load operations)
+* Training machine learning models with on-demand scale-out and scale-up
+* Deployment of machine learning models as public or private web services
+* Monitoring deployed machine learning models (such as for performance or data-drift analysis)
+
+This article will teach you how to create an Azure Pipeline that builds and deploys a machine learning model to [Azure Machine Learning](overview-what-is-azure-machine-learning.md). You'll train a scikit-learn linear regression model on the Diabetes dataset.
+
+This tutorial uses [Azure Machine Learning Python SDK v2](/python/api/overview/azure/ml/installv2) and [Azure CLI ML extension v2](/cli/azure/ml).
+
+## Prerequisites
+
+Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to:
+* Create a workspace
+* Create a cloud-based compute instance to use for your development environment
+* Create a cloud-based compute cluster to use for training your model
+
+## Step 1: Get the code
+
+Fork the following repo at GitHub:
+
+```
+https://github.com/azure/azureml-examples
+```
+
+## Step 2: Sign in to Azure Pipelines
+++
+## Step 3: Create an Azure Resource Manager connection
+
+You'll need an Azure Resource Manager connection to authenticate with Azure portal.
+
+1. In Azure DevOps, open the **Service connections** page.
+
+1. Choose **+ New service connection** and select **Azure Resource Manager**.
+
+1. Select the default authentication method, **Service principal (automatic)**.
+
+1. Create your service connection. Set your subscription, resource group, and connection name.
+
+ :::image type="content" source="media/how-to-devops-machine-learning/machine-learning-arm-connection.png" alt-text="Screenshot of ARM service connection.":::
++
+## Step 4: Create a pipeline
+
+1. Go to **Pipelines**, and then select **New pipeline**.
+
+1. Do the steps of the wizard by first selecting **GitHub** as the location of your source code.
+
+1. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
+
+1. When you see the list of repositories, select your repository.
+
+1. You might be redirected to GitHub to install the Azure Pipelines app. If so, select **Approve & install**.
+
+1. Select the **Starter pipeline**. You'll update the starter pipeline template.
+
+## Step 5: Create variables
+
+You should already have a resource group in Azure with [Azure Machine Learning](overview-what-is-azure-machine-learning.md). To deploy your DevOps pipeline to AzureML, you'll need to create variables for your subscription ID, resource group, and machine learning workspace.
+
+1. Select the Variables tab on your pipeline edit page.
+
+ :::image type="content" source="media/how-to-devops-machine-learning/machine-learning-select-variables.png" alt-text="Screenshot of variables option in pipeline edit. ":::
+
+1. Create a new variable, `Subscription_ID`, and select the checkbox **Keep this value secret**. Set the value to your [Azure portal subscription ID](/azure/azure-portal/get-subscription-tenant-id).
+1. Create a new variable for `Resource_Group` with the name of the resource group for Azure Machine Learning (example: `machinelearning`).
+1. Create a new variable for `AzureML_Workspace_Name` with the name of your Azure ML workspace (example: `docs-ws`).
+1. Select **Save** to save your variables.
+
+## Step 6: Build your YAML pipeline
+
+Delete the starter pipeline and replace it with the following YAML code. In this pipeline, you'll:
+
+* Use the Python version task to set up Python 3.8 and install the SDK requirements.
+* Use the Bash task to run bash scripts for the Azure Machine Learning SDK and CLI.
+* Use the Azure CLI task to pass the values of your three variables and use papermill to run your Jupyter notebook and push output to AzureML.
+
+```yaml
+trigger:
+- main
+
+pool:
+ vmImage: ubuntu-latest
+
+steps:
+- task: UsePythonVersion@0
+ inputs:
+ versionSpec: '3.8'
+- script: pip install -r sdk/dev-requirements.txt
+ displayName: 'pip install notebook reqs'
+- task: Bash@3
+ inputs:
+ filePath: 'sdk/setup.sh'
+ displayName: 'set up sdk'
+
+- task: Bash@3
+ inputs:
+ filePath: 'cli/setup.sh'
+ displayName: 'set up CLI'
+
+- task: AzureCLI@2
+ inputs:
+ azureSubscription: 'your-azure-subscription'
+ scriptType: 'bash'
+ scriptLocation: 'inlineScript'
+ inlineScript: |
+ sed -i -e "s/<SUBSCRIPTION_ID>/$(SUBSCRIPTION_ID)/g" sklearn-diabetes.ipynb
+ sed -i -e "s/<RESOURCE_GROUP>/$(RESOURCE_GROUP)/g" sklearn-diabetes.ipynb
+ sed -i -e "s/<AML_WORKSPACE_NAME>/$(AZUREML_WORKSPACE_NAME)/g" sklearn-diabetes.ipynb
+ sed -i -e "s/DefaultAzureCredential/AzureCliCredential/g" sklearn-diabetes.ipynb
+ papermill -k python sklearn-diabetes.ipynb sklearn-diabetes.output.ipynb
+ workingDirectory: 'sdk/jobs/single-step/scikit-learn/diabetes'
+```
++
+## Step 7: Verify your pipeline run
+
+1. Open your completed pipeline run and view the AzureCLI task. Check the task view to verify that the output task finished running.
+
+ :::image type="content" source="media/how-to-devops-machine-learning/machine-learning-azurecli-output.png" alt-text="Screenshot of machine learning output to AzureML.":::
+
+1. Open Azure Machine Learning studio and navigate to the completed `sklearn-diabetes-example` job. On the **Metrics** tab, you should see the training results.
+
+ :::image type="content" source="media/how-to-devops-machine-learning/machine-learning-training-results.png" alt-text="Screenshot of training results.":::
+
+## Clean up resources
+
+If you're not going to continue to use your pipeline, delete your Azure DevOps project. In Azure portal, delete your resource group and Azure Machine Learning instance.
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
Last updated 04/28/2022 -+ # Log metrics, parameters and files with MLflow + > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"] > * [v1](./v1/how-to-log-view-metrics.md) > * [v2 (current)](how-to-log-view-metrics.md)
machine-learning How To Use Parallel Job In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-parallel-job-in-pipeline.md
+
+ Title: How to use parallel job in pipeline
+
+description: How to use parallel job in Azure Machine Learning pipeline using CLI v2 and Python SDK
++++++ Last updated : 09/27/2022+++
+# How to use parallel job in pipeline (V2) (preview)
++
+Parallel job lets users accelerate their job execution by distributing repeated tasks on powerful multi-node compute clusters. For example, take the scenario where you're running an object detection model on a large set of images. With an Azure ML parallel job, you can easily distribute your images to run custom code in parallel on a specific compute cluster. Parallelization can significantly reduce the time cost. Also, by using an Azure ML parallel job, you can simplify and automate your process to make it more efficient.
+
+## Prerequisite
+
+Azure ML parallel job can only be used as one of the steps in a pipeline job. Thus, it's important to be familiar with using pipelines. To learn more about Azure ML pipelines, see the following articles.
+
+- Understand what an [Azure Machine Learning pipeline](concept-ml-pipelines.md) is
+- Understand how to use Azure ML pipeline with [CLI v2](how-to-create-component-pipelines-cli.md) and [SDK v2](how-to-create-component-pipeline-python.md).
+
+## Why are parallel jobs needed?
+
+In the real world, ML engineers often have scale requirements for their training or inferencing tasks. For example, when a data scientist provides a single script to train a sales prediction model, ML engineers need to apply this training task to each individual store. During this scale-out process, some challenges are:
+
+- Delay pressure caused by long execution time.
+- Manual intervention to handle unexpected issues to keep the task proceeding.
+
+The core value of Azure ML parallel job is to split a single serial task into mini-batches and dispatch those mini-batches to multiple computes to execute in parallel. By using parallel jobs, we can:
+
+ - Significantly reduce end-to-end execution time.
+ - Use Azure ML parallel job's automatic error handling settings.
+
+You should consider using Azure ML Parallel job if:
+
+ - You plan to train many models on top of your partitioned data.
+ - You want to accelerate your large scale batch inferencing task.
+
+## Prepare for parallel job
+
+Unlike other types of jobs, a parallel job requires preparation. Follow the next sections to prepare for creating your parallel job.
+
+### Declare the inputs to be distributed and partition setting
+
+A parallel job requires only one **major input data** to be split and processed in parallel. The major input data can be either tabular data or a set of files. Different input types can have different partition methods.
+
+The following table illustrates the relation between input data and partition setting:
+
+| Data format | AML input type | AML input mode | Partition method |
+|:---|:---|:---|:---|
+| File list | `mltable` or<br>`uri_folder` | ro_mount or<br>download | By size (number of files) |
+| Tabular data | `mltable` | direct | By size (estimated physical size) |
+
+You can declare your major input data with the `input_data` attribute in the parallel job YAML or Python SDK, and bind it to one of the defined `inputs` of your parallel job by using `${{inputs.<input name>}}`. Then, define the partition method for your major input.
+
+For example, you could set numbers to `mini_batch_size` to partition your data **by size**.
+
+- When using file list input, this value defines the number of files for each mini-batch.
+- When using tabular input, this value defines the estimated physical size for each mini-batch.
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
++
+Declare `job_data_path` as one of the inputs. Bind it to the `input_data` attribute.
+
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=parallel-job-for-file-data)]
+++
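+
+As an illustrative sketch only (the names, paths, and environment below are assumptions, not taken from the referenced notebook), declaring the major input and partitioning a file-list input by size with the Python SDK v2 might look like the following:
+
+```python
+from azure.ai.ml import Input, Output
+from azure.ai.ml.parallel import parallel_run_function, RunFunction
+
+# Hypothetical names and paths, for illustration only.
+file_batch_step = parallel_run_function(
+    name="file_batch_score",
+    inputs=dict(
+        job_data_path=Input(type="uri_folder", description="Folder holding the files to score"),
+    ),
+    outputs=dict(job_output_path=Output(type="uri_folder")),
+    input_data="${{inputs.job_data_path}}",  # bind the major input to be partitioned
+    mini_batch_size="10",                    # partition by size: 10 files per mini-batch
+    task=RunFunction(
+        code="./src",
+        entry_script="score.py",
+        environment="azureml:my-parallel-env:1",  # placeholder environment (assumption)
+    ),
+)
+```
+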
+Once you have the partition setting defined, you can configure the parallel settings by using the two attributes below:
+
+| Attribute name | Type | Description | Default value |
+|:-|--|:-|--|
+| `instance_count` | integer | The number of nodes to use for the job. | 1 |
+| `max_concurrency_per_instance` | integer | The number of processors on each node. | For a GPU compute, the default value is 1.<br> For a CPU compute, the default value is the number of cores. |
+
+These two attributes work together with your specified compute cluster.
++
+Sample code to set two attributes:
+
+# [Azure CLI](#tab/cliv2)
++++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=parallel-job-for-file-data)]
++
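+
+As a compact, hypothetical sketch (again not the notebook code), the two attributes slot into the same `parallel_run_function` call:
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.parallel import parallel_run_function, RunFunction
+
+# With instance_count=2 and max_concurrency_per_instance=2, up to
+# 2 x 2 = 4 mini-batches run at the same time on the attached cluster.
+step = parallel_run_function(
+    name="parallel_step",
+    inputs=dict(job_data_path=Input(type="uri_folder")),
+    input_data="${{inputs.job_data_path}}",
+    mini_batch_size="10",
+    instance_count=2,                # number of nodes to use for the job
+    max_concurrency_per_instance=2,  # processors per node
+    task=RunFunction(code="./src", entry_script="score.py",
+                     environment="azureml:my-parallel-env:1"),  # placeholder env
+)
+```
+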
+> [!NOTE]
+> If you use tabular `mltable` as your major input data, you need to have the MLTABLE specification file with `transformations - read_delimited` section filled under your specific path. For more examples, see [Create a mltable data asset](how-to-create-register-data-assets.md#create-a-mltable-data-asset)
+
+### Implement predefined functions in entry script
+
+The entry script is a single Python file where the user needs to implement three predefined functions with custom code. Azure ML parallel job follows the diagram below to execute them in each processor.
++
+| Function name | Required | Description | Input | Return |
+|:---|:---|:---|:---|:---|
+| Init() | Y | Use this function for common preparation before starting to run mini-batches. For example, use it to load the model into a global object. | -- | -- |
+| Run(mini_batch) | Y | Implement main execution logic for mini-batches. | mini_batch: <br>Pandas dataframe if the input data is tabular.<br>List of file paths if the input data is a directory. | Dataframe, List, or Tuple. |
+| Shutdown() | N | Optional function to do custom cleanup before returning the compute back to the pool. | -- | -- |
+
+Check the following entry script examples to get more details:
+
+- [Image identification for a list of image files](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/parallel-run/Code/digit_identification.py)
+- [Iris classification for a tabular iris data](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/parallel-run/Code/iris_score.py)
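+
+For orientation only, the following minimal sketch shows the shape of such an entry script (file name and logic are hypothetical; the linked samples use the lowercase function names shown here):
+
+```python
+# score.py -- hypothetical entry script name (assumption)
+import os
+from typing import List, Union
+
+import pandas as pd
+
+model = None  # shared state loaded once per processor
+
+
+def init():
+    """Common preparation before any mini-batch runs, e.g. loading a model."""
+    global model
+    model = "placeholder-model"  # stand-in for a real model load
+
+
+def run(mini_batch: Union[pd.DataFrame, List[str]]):
+    """Main logic per mini-batch; must return a dataframe, list, or tuple."""
+    results = []
+    for item in mini_batch:
+        # For a file-list input each item is a file path; for tabular input,
+        # iterate over the DataFrame rows instead.
+        results.append(os.path.basename(str(item)))
+    return results  # one entry per successfully processed item
+
+
+def shutdown():
+    """Optional cleanup before the compute is returned to the pool."""
+    pass
+```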
+
+Once you have the entry script ready, you can set the following two attributes to use it in your parallel job:
+
+| Attribute name | Type | Description | Default value |
+|:---|:---|:---|:---|
+| `code` | string | Local path to the source code directory to be uploaded and used for the job. | |
+| `entry_script` | string | The Python file that contains the implementation of predefined parallel functions. | |
+
+Sample code to set two attributes:
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=parallel-job-for-file-data)]
++
+> [!IMPORTANT]
+> The Run(mini_batch) function requires a return of either a dataframe, list, or tuple item. The parallel job uses the count of that return to measure the number of successful items in that mini-batch. Ideally, the mini-batch count should equal the returned list count if all items were processed successfully in that mini-batch.
+
+> [!IMPORTANT]
+> If you want to parse arguments in the Init() or Run(mini_batch) function, use "parse_known_args" instead of "parse_args" to avoid exceptions. See the [iris_score](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/parallel-run/Code/iris_score.py) example for an entry script with an argument parser.
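+
+A minimal sketch of that pattern, with a hypothetical `--model_dir` argument used only for illustration:
+
+```python
+import argparse
+
+
+def init():
+    """Illustrative only: parse custom arguments safely inside init()."""
+    parser = argparse.ArgumentParser(allow_abbrev=False, description="parallel entry script args")
+    parser.add_argument("--model_dir", type=str, default="./model")  # hypothetical argument
+    # parse_known_args ignores the extra arguments that the parallel runtime
+    # passes to the entry script instead of raising an error for them.
+    args, _ = parser.parse_known_args()
+    print(f"model_dir: {args.model_dir}")
+```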
+
+### Consider automation settings
+
+Azure ML parallel job exposes numerous settings to automatically control the job without manual intervention. See the following table for the details.
+
+| Key | Type | Description | Allowed values | Default value | Set in attribute | Set in program arguments |
+|--|--|--|--|--|--|--|
+| mini batch error threshold | integer | Define the number of failed **mini-batches** that can be ignored in this parallel job. If the count of failed mini-batches is higher than this threshold, the parallel job will be marked as failed.<br><br>A mini-batch is marked as failed if:<br>- the count of returns from run() is less than the mini-batch input count.<br>- exceptions are caught in custom run() code.<br><br>"-1" is the default number, which means to ignore all failed mini-batches during the parallel job. | [-1, int.max] | -1 | mini_batch_error_threshold | N/A |
+| mini batch max retries | integer | Define the number of retries when a mini-batch fails or times out. If all retries fail, the mini-batch is marked as failed and counted by the `mini_batch_error_threshold` calculation. | [0, int.max] | 2 | retry_settings.max_retries | N/A |
+| mini batch timeout | integer | Define the timeout in seconds for executing the custom run() function. If the execution time is higher than this threshold, the mini-batch is aborted and marked as failed to trigger a retry. | (0, 259200] | 60 | retry_settings.timeout | N/A |
+| item error threshold | integer | The threshold of failed **items**. Failed items are counted by the number gap between inputs and returns from each mini-batch. If the sum of failed items is higher than this threshold, the parallel job will be marked as failed.<br><br>Note: "-1" is the default number, which means to ignore all failures during parallel job. | [-1, int.max] | -1 | N/A | --error_threshold |
+| allowed failed percent | integer | Similar to `mini_batch_error_threshold` but uses the percent of failed mini-batches instead of the count. | [0, 100] | 100 | N/A | --allowed_failed_percent |
+| overhead timeout | integer | The timeout in second for initialization of each mini-batch. For example, load mini-batch data and pass it to run() function. | (0, 259200] | 600 | N/A | --task_overhead_timeout |
+| progress update timeout | integer | The timeout in second for monitoring the progress of mini-batch execution. If no progress updates receive within this timeout setting, the parallel job will be marked as failed. | (0, 259200] | Dynamically calculated by other settings. | N/A | --progress_update_timeout |
+| first task creation timeout | integer | The timeout in second for monitoring the time between the job start to the run of first mini-batch. | (0, 259200] | 600 | N/A | --first_task_creation_timeout |
+| logging level | string | Define which level of logs will be dumped to user log files. | INFO, WARNING, or DEBUG | INFO | logging_level | N/A |
+| append row to | string | Aggregate all returns from each run of mini-batch and output it into this file. May reference to one of the outputs of parallel job by using the expression ${{outputs.<output_name>}} | | | task.append_row_to | N/A |
+| copy logs to parent | string | Boolean option to whether copy the job progress, overview, and logs to the parent pipeline job. | True or False | False | N/A | --copy_logs_to_parent |
+| resource monitor interval | integer | The time interval in seconds to dump node resource usage(for example, cpu, memory) to log folder under "logs/sys/perf" path.<br><br>Note: Frequent dump resource logs will slightly slow down the execution speed of your mini-batch. Set this value to "0" to stop dumping resource usage. | [0, int.max] | 600 | N/A | --resource_monitor_interval |
+
+Sample code to update these settings:
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=parallel-job-for-tabular-data)]
++
+## Create parallel job in pipeline
+
+# [Azure CLI](#tab/cliv2)
++
+You can create your parallel job inline with your pipeline job:
+
+# [Python](#tab/python)
++
+First, import the required libraries, initialize your `ml_client` with the proper credentials, and create or retrieve your compute targets:
+
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=required-library)]
+
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=credential)]
+
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=workspace)]
+
+Then implement your parallel job by filling out `parallel_run_function`:
+
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=parallel-job-for-tabular-data)]
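+If you can't open the notebook, the following sketch shows the general shape of such a call. Every name, path, and environment reference here is a placeholder rather than a tested configuration.
+
+```python
+# Illustrative sketch of a parallel job definition (placeholder names throughout).
+from azure.ai.ml import Input, Output
+from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml.parallel import parallel_run_function, RunFunction
+
+batch_score = parallel_run_function(
+    name="batch_score_with_tabular_input",
+    inputs=dict(
+        job_data_path=Input(type=AssetTypes.MLTABLE, description="tabular input data"),
+    ),
+    outputs=dict(job_output_path=Output(type=AssetTypes.MLTABLE)),
+    input_data="${{inputs.job_data_path}}",  # which input to split into mini-batches
+    instance_count=2,
+    max_concurrency_per_instance=2,
+    mini_batch_size="1mb",
+    mini_batch_error_threshold=5,
+    retry_settings=dict(max_retries=2, timeout=60),
+    logging_level="DEBUG",
+    task=RunFunction(
+        code="./src",
+        entry_script="tabular_score.py",
+        environment="azureml:my-parallel-env@latest",  # placeholder environment reference
+        program_arguments="--job_output_path ${{outputs.job_output_path}}",
+        append_row_to="${{outputs.job_output_path}}",
+    ),
+)
+```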
+
+Finally, use your parallel job as a step in your pipeline and bind its inputs and outputs to other steps:
+
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=build-pipeline)]
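+As a rough illustration of that binding (reusing the placeholder `batch_score` node from the sketch above), a pipeline built with the `@pipeline` decorator consumes the parallel node like any other component:
+
+```python
+# Illustrative sketch: use the parallel node inside a pipeline (placeholder names).
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml.dsl import pipeline
+
+@pipeline()
+def parallel_in_pipeline(pipeline_job_data_path):
+    # Bind the pipeline input to the parallel node's input.
+    parallel_node = batch_score(job_data_path=pipeline_job_data_path)
+    # Expose the parallel node's output as a pipeline output.
+    return {"pipeline_job_out": parallel_node.outputs.job_output_path}
+
+pipeline_job = parallel_in_pipeline(
+    pipeline_job_data_path=Input(type=AssetTypes.MLTABLE, path="./my-mltable-folder", mode="direct"),
+)
+pipeline_job.settings.default_compute = "cpu-cluster"  # placeholder compute name
+```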
+++
+## Submit pipeline job and check parallel step in Studio UI
+
+# [Azure CLI](#tab/cliv2)
++
+You can submit your pipeline job with a parallel step by using the CLI command:
+
+```azurecli
+az ml job create --file pipeline.yml
+```
+
+# [Python](#tab/python)
++
+You can submit your pipeline job with a parallel step by using the `jobs.create_or_update` function of `ml_client`:
+
+[!notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb?name=submit-pipeline)]
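+A typical call looks like the following sketch; the experiment name is a placeholder.
+
+```python
+# Submit the pipeline job and print the Studio link.
+submitted_job = ml_client.jobs.create_or_update(
+    pipeline_job,
+    experiment_name="parallel-job-in-pipeline",
+)
+print(submitted_job.studio_url)  # open this URL to view the run in Studio
+```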
+++
+Once you submit your pipeline job, the SDK or CLI widget gives you a web URL link to the Studio UI. The link opens the pipeline graph view by default. Double-click the parallel step to open the right panel of your parallel job.
+
+To check the settings of your parallel job, navigate to the **Parameters** tab, expand **Run settings**, and check the **Parallel** section:
++
+To debug a failure of your parallel job, navigate to the **Outputs + Logs** tab, expand the **logs** folder in the output directories on the left, and check **job_result.txt** to understand why the parallel job failed. For more details about the logging structure of parallel jobs, see the **readme.txt** in the same folder.
++
+## Parallel job in pipeline examples
+
+- Azure CLI + YAML:
+ - [Iris prediction using parallel](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines/iris-batch-prediction-using-parallel) (tabular input)
+ - [mnist identification using parallel](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines/mnist-batch-identification-using-parallel) (file list input)
+- SDK:
+ - [Pipeline with parallel run function](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb)
+
+## Next steps
+
+- For the detailed YAML schema of parallel job, see the [YAML reference for parallel job](reference-yaml-job-parallel.md).
+- To learn how to onboard your data into MLTable, see [Create an mltable data asset](how-to-create-register-data-assets.md#create-a-mltable-data-asset).
+- To learn how to trigger your pipeline on a regular schedule, see [how to schedule pipeline](how-to-schedule-pipeline-job.md).
machine-learning Migrate To V2 Resource Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-compute.md
+
+ Title: 'Migrate compute management from SDK v1 to v2'
+
+description: Migrate compute management from v1 to v2 of Azure Machine Learning SDK
+ Last updated : 09/28/2022
+# Migrate compute management from SDK v1 to v2
+
+The compute management functionality remains unchanged with the v2 development platform.
+
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
++
+## Create compute instance
+
+* SDK v1
+
+ ```python
+ import datetime
+ import time
+
+ from azureml.core.compute import ComputeTarget, ComputeInstance
+ from azureml.core.compute_target import ComputeTargetException
+
+ # Compute Instances need to have a unique name across the region.
+ # Here we create a unique name with current datetime
+ ci_basic_name = "basic-ci" + datetime.datetime.now().strftime("%Y%m%d%H%M")
+
+ compute_config = ComputeInstance.provisioning_configuration(
+ vm_size='STANDARD_DS3_V2'
+ )
+    instance = ComputeInstance.create(ws, ci_basic_name, compute_config)
+ instance.wait_for_completion(show_output=True)
+ ```
+
+* SDK v2
+
+ ```python
+ # Compute Instances need to have a unique name across the region.
+ # Here we create a unique name with current datetime
+ from azure.ai.ml.entities import ComputeInstance, AmlCompute
+ import datetime
+
+ ci_basic_name = "basic-ci" + datetime.datetime.now().strftime("%Y%m%d%H%M")
+ ci_basic = ComputeInstance(name=ci_basic_name, size="STANDARD_DS3_v2")
+ ml_client.begin_create_or_update(ci_basic)
+ ```
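+
+Both snippets assume an authenticated handle to your workspace: `ws` in SDK v1 and `ml_client` in SDK v2. A minimal sketch of how those handles are typically created follows; the subscription, resource group, and workspace names are placeholders.
+
+```python
+# Illustrative setup only - replace the placeholder identifiers with your own values.
+
+# SDK v1: Workspace handle
+from azureml.core import Workspace
+ws = Workspace(subscription_id="<subscription-id>",
+               resource_group="<resource-group>",
+               workspace_name="<workspace-name>")
+
+# SDK v2: MLClient handle
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+ml_client = MLClient(credential=DefaultAzureCredential(),
+                     subscription_id="<subscription-id>",
+                     resource_group_name="<resource-group>",
+                     workspace_name="<workspace-name>")
+```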
+
+## Create compute cluster
+
+* SDK v1
+
+ ```python
+ from azureml.core.compute import ComputeTarget, AmlCompute
+ from azureml.core.compute_target import ComputeTargetException
+
+ # Choose a name for your CPU cluster
+ cpu_cluster_name = "cpucluster"
+ compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS3_V2',
+ max_nodes=4)
+ cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
+ cpu_cluster.wait_for_completion(show_output=True)
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml.entities import AmlCompute
+ cpu_cluster_name = "cpucluster"
+ cluster_basic = AmlCompute(
+ name=cpu_cluster_name,
+ type="amlcompute",
+ size="STANDARD_DS3_v2",
+ max_instances=4
+ )
+ ml_client.begin_create_or_update(cluster_basic)
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[AmlCompute class in SDK v1](/python/api/azureml-core/azureml.core.compute.amlcompute(class))|[AmlCompute entity in SDK v2](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute)|
+
+## Next steps
+
+* [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md)
+* [Create an Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md)
machine-learning Reference Yaml Job Parallel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-parallel.md
+
+ Title: 'CLI (v2) parallel job YAML schema'
+
+description: Reference documentation for the CLI (v2) parallel job YAML schema.
+ Last updated : 09/27/2022
+# CLI (v2) parallel job YAML schema
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](v1/reference-pipeline-yaml.md)
+> * [v2 (current version)](reference-yaml-job-pipeline.md)
+
+> [!IMPORTANT]
+> Parallel job can only be used as a single step inside an Azure ML pipeline job. Thus, there is no source JSON schema for parallel job at this time. This document lists the valid keys and their values when creating a parallel job in a pipeline.
++
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| --- | --- | --- | --- | --- |
+| `type` | const | **Required.** The type of job. | `parallel` | |
+| `inputs` | object | Dictionary of inputs to the parallel job. The key is a name for the input within the context of the job and the value is the input value. <br><br> Inputs can be referenced in the `program_arguments` using the `${{ inputs.<input_name> }}` expression. <br><br> Parallel job inputs can be referenced by pipeline inputs using the `${{ parent.inputs.<input_name> }}` expression. For how to bind the inputs of a parallel step to the pipeline inputs, see the [Expression syntax for binding inputs and outputs between steps in a pipeline job](reference-yaml-core-syntax.md#binding-inputs-and-outputs-between-steps-in-a-pipeline-job). | | |
+| `inputs.<input_name>` | number, integer, boolean, string or object | One of a literal value (of type number, integer, boolean, or string) or an object containing a [job input data specification](#job-inputs). | | |
+| `outputs` | object | Dictionary of output configurations of the parallel job. The key is a name for the output within the context of the job and the value is the output configuration. <br><br> Parallel job outputs can be referenced by pipeline outputs using the `${{ parents.outputs.<output_name> }}` expression. For how to bind the outputs of a parallel step to the pipeline outputs, see the [Expression syntax for binding inputs and outputs between steps in a pipeline job](reference-yaml-core-syntax.md#binding-inputs-and-outputs-between-steps-in-a-pipeline-job). | |
+| `outputs.<output_name>` | object | You can leave the object empty, in which case by default the output will be of type `uri_folder` and Azure ML will system-generate an output location for the output based on the following templatized path: `{settings.datastore}/azureml/{job-name}/{output-name}/`. File(s) to the output directory will be written via read-write mount. If you want to specify a different mode for the output, provide an object containing the [job output specification](#job-outputs). | |
+| `compute` | string | Name of the compute target to execute the job on. The value can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. <br><br> When using parallel job in pipeline, you can leave this setting empty, in which case the compute will be auto-selected by the `default_compute` of pipeline.| | `local` |
+| `task` | object | **Required.** The template for defining the distributed tasks for parallel job. See [Attributes of the `task` key](#attributes-of-the-task-key).|||
+|`input_data`| object | **Required.** Define which input data will be split into mini-batches to run the parallel job. Only applicable when referencing one of the parallel job `inputs` by using the `${{ inputs.<input_name> }}` expression.|||
+| `mini_batch_size` | string | Define the size of each mini-batch to split the input.<br><br> If the input_data is a folder or set of files, this number defines the **file count** for each mini-batch. For example, 10, 100.<br>If the input_data is tabular data from `mltable`, this number defines the approximate physical size for each mini-batch. For example, 100 kb, 100 mb. ||1|
+| `mini_batch_error_threshold` | integer | Define the number of failed mini-batches that can be ignored in this parallel job. If the count of failed mini-batches is higher than this threshold, the parallel job is marked as failed.<br><br>A mini-batch is marked as failed if:<br> - the count of items returned from run() is less than the mini-batch input count. <br> - an exception is caught in custom run() code.<br><br> "-1" is the default value, which means to ignore all failed mini-batches during the parallel job.|[-1, int.max]|-1|
+| `logging_level` | string | Define which level of logs will be dumped to user log files. |INFO, WARNING, DEBUG|INFO|
+| `resources.instance_count` | integer | The number of nodes to use for the job. | | 1 |
+| `max_concurrency_per_instance` | integer| Define the number of processes on each node of compute.<br><br>For a GPU compute, the default value is 1.<br>For a CPU compute, the default value is the number of cores.|||
+| `retry_settings.max_retries` | integer | Define the number of retries when a mini-batch fails or times out. If all retries fail, the mini-batch is marked as failed and counted in the `mini_batch_error_threshold` calculation. ||2|
+| `retry_settings.timeout` | integer | Define the timeout in seconds for executing the custom run() function. If the execution time is higher than this threshold, the mini-batch is aborted and marked as failed to trigger a retry.|(0, 259200]|60|
+
+### Attributes of the `task` key
+
+| Key | Type | Description | Allowed values | Default value |
+| --- | --- | --- | --- | --- |
+| `type` | const | **Required.** The type of task. Only `run_function` is applicable for now.<br><br> In `run_function` mode, you're required to provide `code`, `entry_script`, and `program_arguments` to define a Python script with executable functions and arguments. Note: Parallel job only supports Python scripts in this mode. | run_function | run_function |
+| `code` | string | Local path to the source code directory to be uploaded and used for the job. |||
+| `entry_script` | string | The python file that contains the implementation of pre-defined parallel functions. For more information, see [Prepare entry script to parallel job](). |||
+| `environment` | string or object | **Required** The environment to use for running the task. The value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment_name>:<environment_version>` syntax or `azureml:<environment_name>@latest` (to reference the latest version of an environment). <br><br> To define an inline environment, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties as they aren't supported for inline environments.|||
+| `environment_variables` | object | Dictionary of environment variable key-value pairs to set on the process where the command is executed. |||
+| `program_arguments` | string | The arguments to be passed to the entry script. May contain "--\<arg_name\> ${{inputs.\<input_name\>}}" references to inputs or outputs.<br><br> Parallel job provides a list of predefined arguments to set the configuration of the parallel run. For more information, see [predefined arguments for parallel job](#predefined-arguments-for-parallel-job). |||
+| `append_row_to` | string | Aggregate all returns from each run of a mini-batch and output them into this file. May reference one of the outputs of the parallel job by using the expression \${{outputs.<output_name>}}. |||
+
+### Job inputs
+
+| Key | Type | Description | Allowed values | Default value |
+| --- | --- | --- | --- | --- |
+| `type` | string | The type of job input. Specify `mltable` for input data that points to a location that has the mltable meta file, or `uri_folder` for input data that points to a folder source. | `mltable`, `uri_folder` | `uri_folder` |
+| `path` | string | The path to the data to use as input. The value can be specified in a few ways: <br><br> - A local path to the data source file or folder, for example, `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. For more information, see [Core yaml syntax](reference-yaml-core-syntax.md) on how to use the `azureml://` URI format. <br><br> - An existing registered Azure ML data asset to use as the input. To reference a registered data asset, use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), for example, `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
+| `mode` | string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. Azure ML will resolve the input to the mount path. <br><br> For `download` mode the data will be downloaded to the compute target. Azure ML will resolve the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s) rather than mounting or downloading the data itself, you can use the `direct` mode. It will pass in the URL of the storage location as the job input. In this case, you're fully responsible for handling credentials to access the storage. | `ro_mount`, `download`, `direct` | `ro_mount` |
+
+### Job outputs
+
+| Key | Type | Description | Allowed values | Default value |
+| --- | --- | --- | --- | --- |
+| `type` | string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. | `uri_folder` | `uri_folder` |
+| `mode` | string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode (`rw_mount`) the output directory will be a mounted directory. For upload mode the file(s) written will get uploaded at the end of the job. | `rw_mount`, `upload` | `rw_mount` |
+
+### Predefined arguments for parallel job
+| Key | Description | Allowed values | Default value |
+| --- | --- | --- | --- |
+| `--error_threshold` | The threshold of **failed items**. Failed items are counted by the number gap between inputs and returns from each mini-batch. If the sum of failed items is higher than this threshold, the parallel job is marked as failed.<br><br>Note: "-1" is the default value, which means to ignore all failures during the parallel job.| [-1, int.max] | -1 |
+| `--allowed_failed_percent` | Similar to `mini_batch_error_threshold` but uses the percent of failed mini-batches instead of the count. | [0, 100] | 100 |
+| `--task_overhead_timeout` | The timeout in seconds for the initialization of each mini-batch. For example, loading mini-batch data and passing it to the run() function. | (0, 259200] | 30 |
+| `--progress_update_timeout` | The timeout in seconds for monitoring the progress of mini-batch execution. If no progress updates are received within this timeout setting, the parallel job is marked as failed. | (0, 259200] | Dynamically calculated by other settings. |
+| `--first_task_creation_timeout` | The timeout in seconds for monitoring the time between the job start and the run of the first mini-batch. | (0, 259200] | 600 |
+| `--copy_logs_to_parent` | Boolean option for whether to copy the job progress, overview, and logs to the parent pipeline job. | True, False | False |
+| `--metrics_name_prefix` | Provide the custom prefix of your metrics in this parallel job. | | |
+| `--push_metrics_to_parent` | Boolean option for whether to push metrics to the parent pipeline job. | True, False | False |
+| `--resource_monitor_interval` | The time interval in seconds to dump node resource usage (for example, CPU, memory) to the log folder under the "logs/sys/perf" path. <br><br> Note: Frequent resource log dumps slightly slow down the execution speed of your mini-batch. Set this value to "0" to stop dumping resource usage. | [0, int.max] | 600 |
+
+## Remarks
+
+The `az ml job` commands can be used for managing Azure Machine Learning jobs.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/jobs). Several are shown below.
+
+## YAML: Using parallel job in pipeline
++
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
machine-learning Reference Yaml Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-overview.md
The Azure Machine Learning CLI (v2), an extension to the Azure CLI, often uses a
| Reference | URI |
| - | - |
-| [Online (real-time)](reference-yaml-endpoint-online.md) | https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json |
+| [Managed online (real-time)](reference-yaml-endpoint-online.md) | https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json |
+| [Kubernetes online (real-time)](reference-yaml-endpoint-online.md) | https://azuremlschemas.azureedge.net/latest/kubernetesOnlineEndpoint.schema.json |
| [Batch](reference-yaml-endpoint-batch.md) | https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json | ## Deployment
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-monitoring.md
Previously updated : 06/22/2022 Last updated : 09/27/2022 # Monitor and tune Azure Database for PostgreSQL - Hyperscale (Citus)
These metrics are available for Hyperscale (Citus) nodes:
|memory_percent|Memory percent|Percent|The percentage of memory in use.|
|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
+|replication_lag|Replication Lag|Seconds|How far read replica nodes are behind their counterparts in the primary cluster.|
|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-read-replicas.md
Previously updated : 06/17/2022 Last updated : 09/27/2022 # Read replicas in Azure Database for PostgreSQL - Hyperscale (Citus)
psql -h c.myreplica.postgres.database.azure.com -U citus@myreplica -d postgres
At the prompt, enter the password for the user account.
+## Replica promotion to independent server group
+
+You can promote a replica to an independent server group that is readable and
+writable. A promoted replica no longer receives updates from its original, and
+promotion can't be undone. Promoted replicas can have replicas of their own.
+
+There are two common scenarios for promoting a replica:
+
+1. **Disaster recovery.** If something goes wrong with the primary, or with an
+ entire region, you can open another server group for writes as an emergency
+ procedure.
+2. **Migrating to another region.** If you want to move to another region,
+ create a replica in the new region, wait for data to catch up, then promote
+ the replica. To avoid potentially losing data during promotion, you may want
+ to disable writes to the original server group after the replica catches up.
+
+ You can see how far a replica has caught up using the `replication_lag`
+ metric. See [metrics](concepts-monitoring.md#metrics) for more information.
+ ## Considerations This section summarizes considerations about the read replica feature.
upscale it on the primary.
Firewall rules and parameter settings aren't inherited from the primary server to the replica when the replica is created or afterwards.
-### Cross-region replication (preview)
+### Cross-region replication
Read replicas can be created in the region of the primary server group, or in any other [region supported by Hyperscale (Citus)](resources-regions.md). The
postgresql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-read-replicas-portal.md
Previously updated : 06/17/2022 Last updated : 09/27/2022 # Create and manage read replicas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal
To create a read replica, follow these steps:
4. Enter a name for the read replica.
-5. Select a value from the **Location (preview)** drop-down.
+5. Select a value from the **Location** drop-down.
6. Select **OK** to confirm the creation of the replica.
steps:
3. Enter the name of the primary server group to delete. Select **Delete** to confirm deletion of the primary server group.
+## Promote a replica to an independent server group
+
+To promote a server group replica, follow these steps:
+
+1. In the Azure portal, open the **Replication** page for your server group.
+
+2. Select the **Promote** icon for the desired replica.
+
+3. Select the checkbox indicating you understand the action is irreversible.
+
+4. Select **Promote** to confirm.
## Delete a replica
postgresql Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/product-updates.md
Previously updated : 07/11/2022 Last updated : 09/27/2022 # Product updates for PostgreSQL - Hyperscale (Citus)
Here are the features currently available for preview:
session and object audit logging via the standard PostgreSQL logging facility. It produces audit logs required to pass certain government, financial, or ISO certification audits.
-* **[Cross-region
- replication](concepts-read-replicas.md#cross-region-replication-preview)**.
- Create asynchronous read replicas for a server group in different regions.
## Contact us
private-5g-core How To Guide Deploy A Private Mobile Network Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/how-to-guide-deploy-a-private-mobile-network-azure-portal.md
In this step, you'll create the Mobile Network resource representing your privat
:::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab."::: 1. On the **SIMs** configuration tab, select your chosen input method by selecting the appropriate option next to **How would you like to input the SIMs information?**. You can then input the information you collected in [Collect SIM values](collect-required-information-for-private-mobile-network.md#collect-sim-values).
-
+ - If you decided that you don't want to provision any SIMs at this point, select **Add SIMs later**.
- - If you select **Add manually**, a new set of fields will appear under **Enter SIM profile configurations**. Fill out the first row of these fields with the correct settings for the first SIM you want to provision. If you've got more SIMs you want to provision, add the settings for each of these SIMs to a new row.
+ - If you select **Add manually**, a new **Add SIM** button will appear under **Enter SIM profile configurations**. Select it, fill out the fields with the correct settings for the first SIM you want to provision, and select **Add SIM**. Repeat this process for every additional SIM you want to provision.
+
+ :::image type="content" source="media/add-sim-manually.png" alt-text="Screenshot of the Azure portal showing the Add SIM screen.":::
+ - If you select **Upload JSON file**, the **Upload SIM profile configurations** field will appear. Use this field to upload your chosen JSON file. :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
private-5g-core Manage Sim Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/manage-sim-groups.md
To create a new SIM group:
1. Select **Next: SIMs**. 1. On the **SIMs** configuration tab, select your chosen input method by selecting the appropriate option next to **How would you like to input the SIMs information?**. You can then input the information you collected for your SIMs.
-
+ - If you decided that you don't want to provision any SIMs at this point, select **Add SIMs later**.
- - If you select **Add manually**, a new set of fields will appear under **Enter SIM profile configurations**. Fill out the first row of these fields with the correct settings for the first SIM you want to provision. If you've got more SIMs you want to provision, add the settings for each of these SIMs to a new row.
+ - If you select **Add manually**, a new **Add SIM** button will appear under **Enter SIM profile configurations**. Select it, fill out the fields with the correct settings for the first SIM you want to provision, and select **Add SIM**. Repeat this process for every additional SIM you want to provision.
+
+ :::image type="content" source="media/add-sim-manually.png" alt-text="Screenshot of the Azure portal showing the Add SIM screen.":::
+ - If you select **Upload JSON file**, the **Upload SIM profile configurations** field will appear. Use this field to upload your chosen JSON file. :::image type="content" source="media/manage-sim-groups/create-sim-group-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
purview How To Link Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-link-azure-data-factory.md
This document explains the steps required for connecting an Azure Data Factory a
## View existing Data Factory connections
-Multiple Azure Data Factories can connect to a single Microsoft Purview to push lineage information. The current limit allows you to connect up 10 Data Factory accounts at a time from the Microsoft Purview management center. To show the list of Data Factory accounts connected to your Microsoft Purview account, do the following:
+Multiple Azure Data Factories can connect to a single Microsoft Purview to push lineage information. The current limit allows you to connect up to 10 Data Factory accounts at a time from the Microsoft Purview management center. To show the list of Data Factory accounts connected to your Microsoft Purview account, do the following:
1. Select **Management** on the left navigation pane. 2. Under **Lineage connections**, select **Data Factory**.
Follow the steps below to connect an existing data factory to your Microsoft Pur
:::image type="content" source="./media/how-to-link-azure-data-factory/warning-for-disconnect-factory.png" alt-text="Screenshot showing warning to disconnect Azure Data Factory."::: >[!Note]
->We support adding 10 Data Factory at once. If you want to add more than 10 Data Factory, please do so in multiple batches of 10 Data Factory.
+>We support adding up to 10 Azure Data Factory accounts at once. If you want to add more than 10 data factory accounts, do so in multiple batches.
### How authentication works
role-based-access-control Conditions Custom Security Attributes Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-custom-security-attributes-example.md
There are several access control mechanisms that you could use to provide access
Access keys are a common way to provide access to data plane resources. Access keys provide read, write, and delete permissions to whoever possesses the access key. This means attackers can get access to your sensitive data if they can get your access keys. Access keys do not have identity binding, do not have an expiration, and are a security risk to store.
-Like access keys, shared access signature (SAS) tokens do not have identity binding, but expire on a regularly basis. The lack of identity binding represents the same security risks as access keys do. You must manage the expiration to ensure that clients do not get errors. SAS tokens require additional code to manage and operate daily and can be a significant overhead for a DevOps team.
+Like access keys, shared access signature (SAS) tokens do not have identity binding, but expire on a regular basis. The lack of identity binding represents the same security risks as access keys do. You must manage the expiration to ensure that clients do not get errors. SAS tokens require additional code to manage and operate daily and can be a significant overhead for a DevOps team.
Azure RBAC provides centralized fine-grained access control. Azure RBAC has identity binding that reduces your security risk. Using conditions you can potentially scale the management of role assignments and make access control easier to maintain because access is based on flexible and dynamic attributes.
If you have a similar scenario, follow these steps to see if you could potential
To use this solution, you must have: -- Multiple built-in or custom role assignments that have [storage blob data actions](../storage/blobs/storage-auth-abac-attributes.md). These include the following built-in roles:
+- Multiple built-in or custom role assignments that have [blob storage data actions](../storage/blobs/storage-auth-abac-attributes.md). These include the following built-in roles:
- [Storage Blob Data Contributor](built-in-roles.md#storage-blob-data-contributor) - [Storage Blob Data Owner](built-in-roles.md#storage-blob-data-owner)
role-based-access-control Conditions Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-custom-security-attributes.md
You can also use Azure CLI to add role assignments conditions. The following com
- [What are custom security attributes in Azure AD? (Preview)](../active-directory/fundamentals/custom-security-attributes-overview.md) - [Azure role assignment condition format and syntax (preview)](conditions-format.md)-- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md?toc=/azure/role-based-access-control/toc.json)
+- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md?toc=/azure/role-based-access-control/toc.json)
role-based-access-control Conditions Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-format.md
Previously updated : 07/21/2022 Last updated : 09/28/2022 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
Currently, conditions can be added to built-in or custom role assignments that h
- [Storage Queue Data Message Sender](built-in-roles.md#storage-queue-data-message-sender) - [Storage Queue Data Reader](built-in-roles.md#storage-queue-data-reader)
-For a list of the blob storage actions you can use in conditions, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md).
+For a list of the storage actions you can use in conditions, see [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md) and [Actions and attributes for Azure role assignment conditions for Azure queues (preview)](../storage/queues/queues-auth-abac-attributes.md).
## Attributes
Depending on the selected actions, the attribute might be found in different pla
For a list of the blob storage or queue storage attributes you can use in conditions, see: -- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions for Azure queues (preview)](../storage/queues/queues-auth-abac-attributes.md)
#### Principal attributes
a AND (b OR c)
## Next steps -- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)-- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md)
+- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)
- [Add or edit Azure role assignment conditions using the Azure portal (preview)](conditions-role-assignments-portal.md)
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
Azure ABAC builds on Azure RBAC by adding role assignment conditions based on at
There are three primary benefits for using role assignment conditions: -- **Provide more fine-grained access control** - A role assignment uses a role definition with actions and data actions to grant a security principal permissions. You can write conditions to filter down those permissions for more fine-grained access control. You can also add conditions to specific actions. For example, you can grant John read access to blobs in your subscription only if the blobs are tagged as Project=Blue.
+- **Provide more fine-grained access control** - A role assignment uses a role definition with actions and data actions to grant security principal permissions. You can write conditions to filter down those permissions for more fine-grained access control. You can also add conditions to specific actions. For example, you can grant John read access to blobs in your subscription only if the blobs are tagged as Project=Blue.
- **Help reduce the number of role assignments** - Each Azure subscription currently has a role assignment limit. There are scenarios that would require thousands of role assignments. All of those role assignments would have to be managed. In these scenarios, you could potentially add conditions to use significantly fewer role assignments. - **Use attributes that have specific business meaning** - Conditions allow you to use attributes that have specific business meaning to you in access control. Some examples of attributes are project name, software development stage, and classification levels. The values of these resource attributes are dynamic and change as users move across teams and projects.
There are several scenarios where you might want to add a condition to your role
- Read access to blobs with the tag Program=Alpine and a path of logs - Read access to blobs with the tag Project=Baker and the user has a matching attribute Project=Baker
-For more information about how to create these examples, see [Examples of Azure role assignment conditions](../storage/blobs/storage-auth-abac-examples.md).
+For more information about how to create these examples, see [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md).
## Where can conditions be added?
Here are the known issues with conditions:
## Next steps - [FAQ for Azure role assignment conditions (preview)](conditions-faq.md)-- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)
- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)
role-based-access-control Conditions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-prerequisites.md
For more information about custom security attributes, see:
## Next steps -- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)
- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)
role-based-access-control Conditions Role Assignments Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-cli.md
Alternatively, if you want to delete both the role assignment and the condition,
## Next steps -- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)
- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview)](../storage/blobs/storage-auth-abac-cli.md) - [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-portal.md
Previously updated : 05/16/2022 Last updated : 09/28/2022
For information about the prerequisites to add or edit role assignment condition
## Step 1: Determine the condition you need
-To determine the conditions you need, review the examples in [Example Azure role assignment conditions](../storage/blobs/storage-auth-abac-examples.md).
+To determine the conditions you need, review the examples in [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md).
-Currently, conditions can be added to built-in or custom role assignments that have [storage blob data actions](../storage/blobs/storage-auth-abac-attributes.md). These include the following built-in roles:
+Currently, conditions can be added to built-in or custom role assignments that have [blob storage data actions](../storage/blobs/storage-auth-abac-attributes.md) or [queue storage data actions](../storage/queues/queues-auth-abac-attributes.md). These include the following built-in roles:
- [Storage Blob Data Contributor](built-in-roles.md#storage-blob-data-contributor) - [Storage Blob Data Owner](built-in-roles.md#storage-blob-data-owner) - [Storage Blob Data Reader](built-in-roles.md#storage-blob-data-reader)
+- [Storage Queue Data Contributor](built-in-roles.md#storage-queue-data-contributor)
+- [Storage Queue Data Message Processor](built-in-roles.md#storage-queue-data-message-processor)
+- [Storage Queue Data Message Sender](built-in-roles.md#storage-queue-data-message-sender)
+- [Storage Queue Data Reader](built-in-roles.md#storage-queue-data-reader)
## Step 2: Choose how to add condition
Once you have the Add role assignment condition page open, you can review the ba
## Next steps -- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)
- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md) - [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-powershell.md
Alternatively, if you want to delete both the role assignment and the condition,
## Next steps -- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)
- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview)](../storage/blobs/storage-auth-abac-powershell.md) - [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-rest.md
Alternatively, if you want to delete both the role assignment and the condition,
## Next steps -- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)
- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md) - [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-template.md
az deployment group create --resource-group example-group --template-file rbac-t
## Next steps -- [Example Azure role assignment conditions (preview)](../storage/blobs/storage-auth-abac-examples.md)
+- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)
- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md) - [Assign Azure roles using Azure Resource Manager templates](role-assignments-template.md)
role-based-access-control Conditions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-troubleshoot.md
Previously updated : 05/16/2022 Last updated : 09/28/2022 #Customer intent:
If your role assignment has multiple actions that grant a permission, ensure tha
**Cause 3**
-When you add a condition to a role assignment, it can take up to 5 minutes for the condition to be enforced. When you add a condition, resource providers (such as Microsoft.Storage) are notified of the update. Resource providers make updates to their local caches immediately to ensure that they have the latest role assignments. This process completes in 1 or 2 minutes, but can take up to 5 minutes.
+When you add a condition to a role assignment, it can take up to 5 minutes for the condition to be enforced. When you add a condition, resource providers (such as Microsoft Storage) are notified of the update. Resource providers make updates to their local caches immediately to ensure that they have the latest role assignments. This process completes in 1 or 2 minutes, but can take up to 5 minutes.
**Solution 3**
The previously selected attribute no longer applies to the currently selected ac
**Solution 1**
-In the **Add action** section, select an action that applies to the selected attribute. For a list of storage actions that each storage attribute supports, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md).
+In the **Add action** section, select an action that applies to the selected attribute. For a list of storage actions that each storage attribute supports, see [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md) and [Actions and attributes for Azure role assignment conditions for Azure queues (preview)](../storage/queues/queues-auth-abac-attributes.md).
**Solution 2**
-In the **Build expression** section, select an attribute that applies to the currently selected actions. For a list of storage attributes that each storage action supports, see [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md).
+In the **Build expression** section, select an attribute that applies to the currently selected actions. For a list of storage attributes that each storage action supports, see [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md) and [Actions and attributes for Azure role assignment conditions for Azure queues (preview)](../storage/queues/queues-auth-abac-attributes.md).
### Symptom - Attribute does not apply in this context warning
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal.md
Previously updated : 08/26/2022 Last updated : 09/28/2022
If you need to assign administrator roles in Azure Active Directory, see [Assign
[!INCLUDE [Azure role assignment prerequisites](../../includes/role-based-access-control/prerequisites-role-assignments.md)]
-#### [Current](#tab/current/)
- ## Step 1: Identify the needed scope [!INCLUDE [Scope for Azure RBAC introduction](../../includes/role-based-access-control/scope-intro.md)] For more information, see [Understand scope](scope-overview.md).
Currently, conditions can be added to built-in or custom role assignments that h
- [Storage Blob Data Contributor](built-in-roles.md#storage-blob-data-contributor) - [Storage Blob Data Owner](built-in-roles.md#storage-blob-data-owner) - [Storage Blob Data Reader](built-in-roles.md#storage-blob-data-reader)
+- [Storage Queue Data Contributor](built-in-roles.md#storage-queue-data-contributor)
+- [Storage Queue Data Message Processor](built-in-roles.md#storage-queue-data-message-processor)
+- [Storage Queue Data Message Sender](built-in-roles.md#storage-queue-data-message-sender)
+- [Storage Queue Data Reader](built-in-roles.md#storage-queue-data-reader)
1. Click **Add condition** if you want to further refine the role assignments based on storage blob attributes. For more information, see [Add or edit Azure role assignment conditions](conditions-role-assignments-portal.md).
Currently, conditions can be added to built-in or custom role assignments that h
1. If you don't see the description for the role assignment, click **Edit columns** to add the **Description** column.
-#### [Classic](#tab/classic/)
-
-## Step 1: Identify the needed scope (classic)
--
-![Diagram that shows the scope levels for Azure RBAC for classic experience.](../../includes/role-based-access-control/media/scope-levels.png)
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the Search box at the top, search for the scope you want to grant access to. For example, search for **Management groups**, **Subscriptions**, **Resource groups**, or a specific resource.
-
-1. Click the specific resource for that scope.
-
- The following shows an example resource group.
-
- ![Screenshot of resource group overview page for classic experience.](./media/shared/rg-overview.png)
-
-## Step 2: Open the Add role assignment pane (classic)
-
-**Access control (IAM)** is the page that you typically use to assign roles to grant access to Azure resources. It's also known as identity and access management (IAM) and appears in several locations in the Azure portal.
-
-1. Click **Access control (IAM)**.
-
- The following shows an example of the Access control (IAM) page for a resource group.
-
- ![Screenshot of Access control (IAM) page for a resource group for classic experience.](./media/shared/rg-access-control.png)
-
-1. Click the **Role assignments** tab to view the role assignments at this scope.
-
-1. Click **Add** > **Add role assignment**.
- If you don't have permissions to assign roles, the Add role assignment option will be disabled.
-
- ![Screenshot of Add > Add role assignment menu for classic experience.](./media/shared/add-role-assignment-menu.png)
-
-1. On the Add role assignment page, click **Use classic experience**.
-
- ![Screenshot of Add role assignment page with Use classic experience link for classic experience.](./media/role-assignments-portal/add-role-assignment-page-use-classic.png)
-
- The Add role assignment pane opens.
-
- ![Screenshot of Add role assignment page with Role, Assign access to, and Select options for classic experience.](./media/role-assignments-portal/add-role-assignment-page.png)
-
-## Step 3: Select the appropriate role (classic)
-
-1. In the **Role** list, search or scroll to find the role that you want to assign.
-
- To help you determine the appropriate role, you can hover over the info icon to display a description for the role. For additional information, you can view the [Azure built-in roles](built-in-roles.md) article.
-
- ![Screenshot of Select a role list in Add role assignment for classic experience.](./media/role-assignments-portal/add-role-assignment-role.png)
-
-1. Click to select the role.
-
-## Step 4: Select who needs access (classic)
-
-1. In the **Assign access to** list, select the type of security principal to assign access to.
-
- | Type | Description |
- | | |
- | **User, group, or service principal** | If you want to assign the role to a user, group, or service principal (application), select this type. |
- | **User-assigned managed identity** | If you want to assign the role to a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), select this type. |
- | *System-assigned managed identity* | If you want to assign the role to a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), select the Azure service instance where the managed identity is located. |
-
- ![Screenshot of selecting a security principal in Add role assignment for classic experience.](./media/role-assignments-portal/add-role-assignment-type.png)
-
-1. If you selected a user-assigned managed identity or a system-assigned managed identity, select the **Subscription** where the managed identity is located.
-
-1. In the **Select** section, search for the security principal by entering a string or scrolling through the list.
-
- ![Screenshot of selecting a user in Add role assignment for classic experience.](./media/role-assignments-portal/add-role-assignment-user.png)
-
-1. Once you have found the security principal, click to select it.
-
-## Step 5: Assign role (classic)
-
-1. To assign the role, click **Save**.
-
- After a few moments, the security principal is assigned the role at the selected scope.
-
-1. On the **Role assignments** tab, verify that you see the role assignment in the list.
-
- ![Screenshot of role assignment list after assigning role for classic experience.](./media/role-assignments-portal/rg-role-assignments.png)
--- ## Next steps - [Assign a user as an administrator of an Azure subscription](role-assignments-portal-subscription-admin.md)
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Title: "Quickstart: Create a search index in the Azure portal"
-description: Create, load, and query your first search index using the Import Data wizard in the Azure portal. This quickstart uses a fictitious hotel dataset for sample data.
+description: Create, load, and query your first search index using the Import Data wizard in Azure portal. This quickstart uses a fictitious hotel dataset for sample data.
# Quickstart: Create an Azure Cognitive Search index in the Azure portal
-In this quickstart, you will create your first search index using the **Import data** wizard and a built-in sample data source consisting of fictitious hotel data. The wizard guides you through the creation of a search index (hotels-sample-index) so that you can write interesting queries within minutes.
+In this quickstart, you'll create your first search index using the **Import data** wizard and a built-in sample data source consisting of fictitious hotel data. The wizard guides you through the creation of a search index (hotels-sample-index) so that you can write interesting queries within minutes.
Although you won't use the options in this quickstart, the wizard includes a page for AI enrichment so that you can extract text and structure from image files and unstructured text. For a similar walkthrough that includes AI enrichment, see [Quickstart: Create a skillset](cognitive-search-quickstart-blob.md).
Many customers start with the free service. The free tier is limited to three in
Check the service overview page to find out how many indexes, indexers, and data sources you already have.
-## Create an index and load data
+## Create and load an index
-Search queries iterate over an [*index*](search-what-is-an-index.md) that contains searchable data, metadata, and additional constructs that optimize certain search behaviors.
+Search queries iterate over an [*index*](search-what-is-an-index.md) that contains searchable data, metadata, and other constructs that optimize certain search behaviors.
For this tutorial, we use a built-in sample dataset that can be crawled using an [*indexer*](search-indexer-overview.md) via the [**Import data** wizard](search-import-data-portal.md). An indexer is a source-specific crawler that can read metadata and content from supported Azure data sources. Normally, indexers are used programmatically, but in the portal, you can access them through the **Import data** wizard.
For this tutorial, we use a built-in sample dataset that can be crawled using an
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, click **Import data** on the command bar to create and populate a search index.
+1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, select **Import data** on the command bar to create and populate a search index.
- :::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
+ :::image type="content" source="medi.png" alt-text="Screenshot of the Import data command in the command bar." border="true":::
-1. In the wizard, click **Connect to your data** > **Samples** > **hotels-sample**. This data source is built-in. If you were creating your own data source, you would need to specify a name, type, and connection information. Once created, it becomes an "existing data source" that can be reused in other import operations.
+1. In the wizard, select **Connect to your data** > **Samples** > **hotels-sample**. This data source is built-in. If you were creating your own data source, you would need to specify a name, type, and connection information. Once created, it becomes an "existing data source" that can be reused in other import operations.
- :::image type="content" source="media/search-get-started-portal/import-datasource-sample.png" alt-text="Select sample dataset":::
+ :::image type="content" source="media/search-get-started-portal/import-datasource-sample.png" alt-text="Screenshot of the select sample dataset page in the wizard.":::
1. Continue to the next page.
The wizard supports the creation of an [AI enrichment pipeline](cognitive-search
We'll skip this step for now, and move directly on to **Customize target index**.
- :::image type="content" source="media/search-get-started-portal/skip-cog-skill-step.png" alt-text="Skip cognitive skill step":::
+ :::image type="content" source="media/search-get-started-portal/skip-cog-skill-step.png" alt-text="Screenshot of the Skip cognitive skill button in the wizard.":::
> [!TIP] > You can step through an AI-indexing example in a [quickstart](cognitive-search-quickstart-blob.md) or [tutorial](cognitive-search-tutorial-blob.md). ### Step 3 - Configure index
-For the built-in hotels sample index, a default index schema is defined for you. With the exception of a few advanced filter examples, queries in the documentation and samples that target the hotel-samples index will run on this index definition:
+For the built-in hotels sample index, a default index schema is defined for you. Except for a few advanced filter examples, queries in the documentation and samples that target the hotel-samples index will run on this index definition:
Typically, in a code-based exercise, index creation is completed prior to loading data. The Import data wizard condenses these steps by generating a basic index for any data source it can crawl. Minimally, an index requires a name and a fields collection; one of the fields should be marked as the document key to uniquely identify each document. Additionally, you can specify language analyzers or suggesters if you want autocomplete or suggested queries.
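For comparison, here's a minimal sketch of the "name plus fields collection plus key field" minimum described above, expressed with the Azure.Search.Documents .NET SDK. The endpoint, key, and field list are placeholders for illustration only; the wizard generates a much richer schema for the hotels sample.

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// Placeholder service endpoint and admin key -- replace with your own values.
var indexClient = new SearchIndexClient(
    new Uri("https://<your-search-service>.search.windows.net"),
    new AzureKeyCredential("<admin-api-key>"));

// A minimal index: a name, a fields collection, and one field marked as the document key.
var index = new SearchIndex("hotels-sample-index")
{
    Fields =
    {
        new SimpleField("HotelId", SearchFieldDataType.String) { IsKey = true, IsFilterable = true },
        new SearchableField("HotelName") { IsSortable = true },
        new SimpleField("Rating", SearchFieldDataType.Double) { IsFilterable = true, IsSortable = true }
    }
};

indexClient.CreateOrUpdateIndex(index);
```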
By default, the wizard scans the data source for unique identifiers as the basis
### Step 4 - Configure indexer
-Still in the **Import data** wizard, click **Indexer** > **Name**, and type a name for the indexer.
+Still in the **Import data** wizard, select **Indexer** > **Name**, and type a name for the indexer.
This object defines an executable process. You could put it on recurring schedule, but for now use the default option to run the indexer once, immediately.
-Click **Submit** to create and simultaneously run the indexer.
+Select **Submit** to create and simultaneously run the indexer.
- :::image type="content" source="media/search-get-started-portal/hotels-indexer.png" alt-text="hotels indexer":::
+ :::image type="content" source="media/search-get-started-portal/hotels-indexer.png" alt-text="Screenshot of the hotels indexer definition in the wizard.":::
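If you later want to create the same kind of indexer in code rather than through the wizard, a rough equivalent with the Azure.Search.Documents SDK might look like the following sketch. The resource names and the recurring schedule are illustrative assumptions; the data source and target index must already exist.

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

var indexerClient = new SearchIndexerClient(
    new Uri("https://<your-search-service>.search.windows.net"),
    new AzureKeyCredential("<admin-api-key>"));

// Placeholder names: the data source and target index are created separately.
var indexer = new SearchIndexer(
    name: "hotels-sample-indexer",
    dataSourceName: "hotels-sample-datasource",
    targetIndexName: "hotels-sample-index")
{
    // Omit Schedule to run once on creation; set it for recurring runs.
    Schedule = new IndexingSchedule(TimeSpan.FromHours(24))
};

// An indexer runs immediately when it's created, unless it's disabled.
indexerClient.CreateOrUpdateIndexer(indexer);
```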
## Monitor progress
-The wizard should take you to the Indexers list where you can monitor progress. For self-navigation, go to the Overview page and click the **Indexers** tab.
+The wizard should take you to the Indexers list where you can monitor progress. For self-navigation, go to the Overview page and select the **Indexers** tab.
It can take a few minutes for the portal to update the page, but you should see the newly created indexer in the list, with status indicating "in progress" or success, along with the number of documents indexed.
- :::image type="content" source="media/search-get-started-portal/indexers-inprogress.png" alt-text="Indexer progress message":::
+ :::image type="content" source="media/search-get-started-portal/indexers-inprogress.png" alt-text="Screenshot of the indexer progress message in the wizard.":::
-## View the index
+## Check results
-The service overview page provides links to the resources created in your Azure Cognitive Search service. To view the index you just created, click **Indexes** from the list of links.
+The service overview page provides links to the resources created in your Azure Cognitive Search service. To view the index you just created, select **Indexes** from the list of links.
Wait for the portal page to refresh. After a few minutes, you should see the index with a document count and storage size.
- :::image type="content" source="media/search-get-started-portal/indexes-list.png" alt-text="Indexes list on the service dashboard":::
+ :::image type="content" source="media/search-get-started-portal/indexes-list.png" alt-text="Screenshot of the Indexes list on the service dashboard.":::
-From this list, you can click on the *hotels-sample* index that you just created, view the index schema. and optionally add new fields.
+From this list, you can select the *hotels-sample* index that you just created, view the index schema, and optionally add new fields.
The **Fields** tab shows the index schema. If you're writing queries and need to check whether a field is filterable or sortable, this tab shows you the attributes. Scroll to the bottom of the list to enter a new field. While you can always create a new field, in most cases, you can't change existing fields. Existing fields have a physical representation in your search service and are thus non-modifiable, not even in code. To fundamentally change an existing field, create a new index, dropping the original.
- :::image type="content" source="media/search-get-started-portal/sample-index-def.png" alt-text="sample index definition":::
+ :::image type="content" source="media/search-get-started-portal/sample-index-def.png" alt-text="Screenshot of the sample index definition in Azure portal.":::
Other constructs, such as scoring profiles and CORS options, can be added at any time.
You now have a search index that can be queried using [**Search explorer**](sear
1. Select **Search explorer** on the command bar.
- :::image type="content" source="medi.png" alt-text="Search explorer command":::
+ :::image type="content" source="medi.png" alt-text="Screenshot of the Search Explorer command on the command bar.":::
1. From **Index**, choose "hotels-sample-index".
- :::image type="content" source="media/search-get-started-portal/search-explorer-changeindex.png" alt-text="Index and API commands":::
+ :::image type="content" source="media/search-get-started-portal/search-explorer-changeindex.png" alt-text="Screenshot of the Index and API selection lists in Search Explorer.":::
1. In the search bar, paste in a query string from the examples below and select **Search**.
- :::image type="content" source="media/search-get-started-portal/search-explorer-query-string-example.png" alt-text="Query string and search button":::
+ :::image type="content" source="media/search-get-started-portal/search-explorer-query-string-example.png" alt-text="Screenshot of the query string text field and search button in Search Explorer.":::
-## Example queries
+## Run more example queries
All of the queries in this section are designed for **Search Explorer** and the Hotels sample index. Results are returned as verbose JSON documents. All fields marked as "retrievable" in the index can appear in results. For more information about queries, see [Querying in Azure Cognitive Search](search-query-overview.md).
If you're using a free service, remember that the limit is three indexes, indexe
## Next steps
-Use a portal wizard to generate a ready-to-use web app that runs in a browser. You can try this wizard out on the small index you just created, or use one of the built-in sample data sets for a richer search experience.
+Use a portal wizard to generate a ready-to-use web app that runs in a browser. You can try out this wizard on the small index you just created, or use one of the built-in sample data sets for a richer search experience.
> [!div class="nextstepaction"] > [Create a demo app in the portal](search-create-app-portal.md)
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
Actions can be defined to run when the conditions (see above) are met. You can d
Also, you can define an action to [**run a playbook**](tutorial-respond-threats-playbook.md), in order to take more complex response actions, including any that involve external systems. The playbooks available to be used in an automation rule depend on the [**trigger**](automate-responses-with-playbooks.md#azure-logic-apps-basic-concepts) on which the playbooks *and* the automation rule are based: Only incident-trigger playbooks can be run from incident-trigger automation rules, and only alert-trigger playbooks can be run from alert-trigger automation rules. You can define multiple actions that call playbooks, or combinations of playbooks and other actions. Actions will run in the order in which they are listed in the rule.
-Playbooks using [either version of Logic Apps (Standard or Consumption)](automate-responses-with-playbooks.md#two-types-of-logic-apps) will be available to run from automation rules.
+Playbooks using [either version of Azure Logic Apps (Standard or Consumption)](automate-responses-with-playbooks.md#logic-app-types) will be available to run from automation rules.
### Expiration date
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
# Automate threat response with playbooks in Microsoft Sentinel - This article explains what Microsoft Sentinel playbooks are, and how to use them to implement your Security Orchestration, Automation and Response (SOAR) operations, achieving better results while saving time and resources. ## What is a playbook?
Technically, a playbook template is an [ARM template](../azure-resource-manager/
### Azure Logic Apps basic concepts
-Playbooks in Microsoft Sentinel are based on workflows built in [Azure Logic Apps](../logic-apps/logic-apps-overview.md), a cloud service that helps you schedule, automate, and orchestrate tasks and workflows across systems throughout the enterprise. This means that playbooks can take advantage of all the power and customizability of Logic Apps' built-in templates.
+Playbooks in Microsoft Sentinel are based on workflows built in [Azure Logic Apps](../logic-apps/logic-apps-overview.md), a cloud service that helps you schedule, automate, and orchestrate tasks and workflows across systems throughout the enterprise. This means that playbooks can take advantage of all the power and capabilities of the built-in templates in Azure Logic Apps.
> [!NOTE]
-> Because Azure Logic Apps are a separate resource, additional charges may apply. Visit the [Azure Logic Apps](https://azure.microsoft.com/pricing/details/logic-apps/) pricing page for more details.
+> Azure Logic Apps creates separate resources, so additional charges might apply. For more information, visit the [Azure Logic Apps pricing page](https://azure.microsoft.com/pricing/details/logic-apps/).
Azure Logic Apps communicates with other systems and services using connectors. The following is a brief explanation of connectors and some of their important attributes: -- **Managed Connector:** A set of actions and triggers that wrap around API calls to a particular product or service. Azure Logic Apps offers hundreds of connectors to communicate with both Microsoft and non-Microsoft services.
- - [List of all Logic Apps connectors and their documentation](/connectors/connector-reference/)
+- **Managed connector:** A set of actions and triggers that wrap around API calls to a particular product or service. Azure Logic Apps offers hundreds of connectors to communicate with both Microsoft and non-Microsoft services. For more information, see [Azure Logic Apps connectors and their documentation](/connectors/connector-reference/connector-reference-logicapps-connectors).
-- **Custom connector:** You may want to communicate with services that aren't available as prebuilt connectors. Custom connectors address this need by allowing you to create (and even share) a connector and define its own triggers and actions.
- - [Create your own custom Logic Apps connectors](/connectors/custom-connectors/create-logic-apps-connector)
+- **Custom connector:** You might want to communicate with services that aren't available as prebuilt connectors. Custom connectors address this need by allowing you to create (and even share) a connector and define its own triggers and actions. For more information, see [Create your own custom Azure Logic Apps connectors](/connectors/custom-connectors/create-logic-apps-connector).
-- **Microsoft Sentinel Connector:** To create playbooks that interact with Microsoft Sentinel, use the Microsoft Sentinel connector.
- - [Microsoft Sentinel connector documentation](/connectors/azuresentinel/)
+- **Microsoft Sentinel connector:** To create playbooks that interact with Microsoft Sentinel, use the Microsoft Sentinel connector. For more information, see the [Microsoft Sentinel connector documentation](/connectors/azuresentinel/).
-- **Trigger:** A connector component that starts a playbook. It defines the schema that the playbook expects to receive when triggered. The Microsoft Sentinel connector currently has two triggers:
- - [Alert trigger](/connectors/azuresentinel/#triggers): the playbook receives the alert as its input.
- - [Incident trigger](/connectors/azuresentinel/#triggers): the playbook receives the incident as its input, along with all its included alerts and entities.
+- **Trigger:** A connector component that starts a workflow, in this case, a playbook. The Microsoft Sentinel trigger defines the schema that the playbook expects to receive when triggered. The Microsoft Sentinel connector currently has two triggers:
+ - [Alert trigger](/connectors/azuresentinel/#triggers): The playbook receives the alert as input.
+ - [Incident trigger](/connectors/azuresentinel/#triggers): The playbook receives the incident as input, along with all the included alerts and entities.
- **Actions:** Actions are all the steps that happen after the trigger. They can be arranged sequentially, in parallel, or in a matrix of complex conditions. - **Dynamic fields:** Temporary fields, determined by the output schema of triggers and actions and populated by their actual output, that can be used in the actions that follow.
-#### Two types of Logic Apps
+#### Logic app types
+
+Microsoft Sentinel now supports the following logic app resource types:
-Microsoft Sentinel now supports two Logic Apps resource types:
+- **Consumption**, which runs in multi-tenant Azure Logic Apps and uses the classic, original Azure Logic Apps engine.
+- **Standard**, which runs in single-tenant Azure Logic Apps and uses a redesigned Azure Logic Apps engine.
-- **Logic App (Consumption)**, based on the classic, original Logic Apps engine, and-- **Logic App (Standard)**, based on the new Logic Apps engine.
+The **Standard** logic app type offers higher performance, fixed pricing, multiple workflow capability, easier API connections management, native network capabilities such as support for virtual networks and private endpoints (see note below), built-in CI/CD features, better Visual Studio Code integration, an updated workflow designer, and more.
-**Logic Apps Standard** features a single-tenant, containerized environment that provides higher performance, fixed pricing, single apps containing multiple workflows, easier API connections management, native network capabilities such as virtual networking (VNet) and private endpoints support, built-in CI/CD features, better Visual Studio integration, a new version of the Logic Apps Designer, and more.
+To use this logic app version, create new Standard playbooks in Microsoft Sentinel (see note below). You can use these playbooks in the same ways that you use Consumption playbooks:
-You can leverage this powerful new version of Logic Apps by creating new Standard playbooks in Microsoft Sentinel, and you can use them the same ways you use the classic Logic App Consumption playbooks:
- Attach them to automation rules and/or analytics rules. - Run them on demand, from both incidents and alerts. - Manage them in the Active Playbooks tab.
-There are many differences between these two resource types, some of which affect some of the ways they can be used in playbooks in Microsoft Sentinel. In such cases, the documentation will point out what you need to know.
-
-See [Resource type and host environment differences](../logic-apps/logic-apps-overview.md#resource-type-and-host-environment-differences) in the Logic Apps documentation for a detailed summary of the two resource types.
- > [!NOTE]
-> - You'll notice an indicator in Standard workflows that presents as either *stateful* or *stateless*. Microsoft Sentinel does not support stateless workflows at this time. Learn about the differences between [**stateful and stateless workflows**](../logic-apps/single-tenant-overview-compare.md#stateful-and-stateless-workflows).
-> - Logic Apps Standard does not currently support Playbook templates. This means that you can't create a Standard workflow from within Microsoft Sentinel. Rather, you must create it in Logic Apps, and once it's created, you'll see it in Microsoft Sentinel.
+>
+> - Standard workflows currently don't support Playbook templates, which means you can't create a Standard workflow-based playbook directly in Microsoft Sentinel. Instead, you must create the workflow in Azure Logic Apps. After you've created the workflow, it appears as a playbook in Microsoft Sentinel.
+>
+> - Although Standard workflows support private endpoints as mentioned above, Microsoft Sentinel doesn't currently support the use of private endpoints in playbooks, even those based on Standard workflows.
> Workflows with private endpoints might still be visible and selectable when you're choosing a playbook from a list in Microsoft Sentinel (whether to run manually, to add to an automation rule, or in the playbooks gallery), but their execution will fail.
+>
+> - An indicator identifies Standard workflows as either *stateful* or *stateless*. Microsoft Sentinel doesn't support stateless workflows at this time. Learn about the differences between [**stateful and stateless workflows**](../logic-apps/single-tenant-overview-compare.md#stateful-and-stateless-workflows).
+
+There are many differences between these two resource types, some of which affect some of the ways they can be used in playbooks in Microsoft Sentinel. In such cases, the documentation will point out what you need to know. For more information, see [Resource type and host environment differences](../logic-apps/logic-apps-overview.md#resource-environment-differences) in the Azure Logic Apps documentation.
### Permissions required
- To give your SecOps team the ability to use Logic Apps to create and run playbooks in Microsoft Sentinel, assign Azure roles to your security operations team or to specific users on the team. The following describes the different available roles, and the tasks for which they should be assigned:
+ To give your SecOps team the ability to use Azure Logic Apps to create and run playbooks in Microsoft Sentinel, assign Azure roles to your security operations team or to specific users on the team. The following describes the different available roles, and the tasks for which they should be assigned:
-#### Azure roles for Logic Apps
+#### Azure roles for Azure Logic Apps
- **Logic App Contributor** lets you manage logic apps and run playbooks, but you can't change access to them (for that you need the **Owner** role).-- **Logic App Operator** lets you read, enable, and disable logic apps, but you can't edit or update them.
+- **Logic App Operator** lets you read, enable, and disable logic apps, but you can't edit or update them.
-#### Azure roles for Sentinel
+#### Azure roles for Microsoft Sentinel
- **Microsoft Sentinel Contributor** role lets you attach a playbook to an analytics rule. - **Microsoft Sentinel Responder** role lets you run a playbook manually.
See [Resource type and host environment differences](../logic-apps/logic-apps-ov
- [Define the automation scenario](#use-cases-for-playbooks). -- [Build the Azure Logic App](tutorial-respond-threats-playbook.md).
+- [Build your logic app](tutorial-respond-threats-playbook.md).
-- [Test your Logic App](#run-a-playbook-manually).
+- [Test your logic app](#run-a-playbook-manually).
- Attach the playbook to an [automation rule](#incident-creation-automated-response) or an [analytics rule](#alert-creation-automated-response), or [run manually when required](#run-a-playbook-manually).
In either of these panels, you'll see two tabs: **Playbooks** and **Runs**.
- In the **Playbooks** tab, you'll see a list of all the playbooks that you have access to and that use the appropriate trigger - the **Microsoft Sentinel Incident** trigger for incident playbooks and the **Microsoft Sentinel Alert** trigger for alert playbooks. Each playbook in the list has a **Run** button which you select to run the playbook immediately. If you want to run an incident-trigger playbook that you don't see in the list, [see the note about Microsoft Sentinel permissions above](#incident-creation-automated-response). -- In the **Runs** tab, you'll see a list of all the times any playbook has been run on the incident or alert you selected. It might take a few seconds for any just-completed run to appear in this list. Selecting a specific run will open the full run log in Logic Apps.
+- In the **Runs** tab, you'll see a list of all the times any playbook has been run on the incident or alert you selected. It might take a few seconds for any just-completed run to appear in this list. Selecting a specific run will open the full run log in Azure Logic Apps.
## Manage your playbooks In the **Active playbooks** tab, there appears a list of all the playbooks which you have access to, filtered by the subscriptions which are currently displayed in Azure. The subscriptions filter is available from the **Directory + subscription** menu in the global page header.
-Clicking on a playbook name directs you to the playbook's main page in Logic Apps. The **Status** column indicates if it is enabled or disabled.
+Clicking on a playbook name directs you to the playbook's main page in Azure Logic Apps. The **Status** column indicates if it is enabled or disabled.
The **Plan** column indicates whether the playbook uses the **Standard** or **Consumption** resource type in Azure Logic Apps. You can filter the list by plan type to see only one type of playbook. You'll notice that playbooks of the Standard type use the `LogicApp/Workflow` naming convention. This convention reflects the fact that a Standard playbook represents a workflow that exists *alongside other workflows* in a single Logic App.
-**Trigger kind** represents the Logic Apps trigger that starts this playbook.
+**Trigger kind** represents the Azure Logic Apps trigger that starts this playbook.
| Trigger kind | Indicates component types in playbook | |-|-|
The **Plan** column indicates whether the playbook uses the **Standard** or **Co
| **Not initialized** | The playbook has been created, but contains no components (triggers or actions). | |
-In the playbook's Logic App page, you can see more information about the playbook, including a log of all the times it has run, and the result (success or failure, and other details). You can also enter the Logic Apps Designer and edit the playbook directly, if you have the appropriate permissions.
+In the playbook's Azure Logic Apps page, you can see more information about the playbook, including a log of all the times it has run, and the result (success or failure, and other details). You can also open the workflow designer in Azure Logic Apps, and edit the playbook directly, if you have the appropriate permissions.
### API connections
-API connections are used to connect Logic Apps to other services. Every time a new authentication to a Logic Apps connector is made, a new resource of type **API connection** is created, and contains the information provided when configuring access to the service.
+API connections are used to connect Azure Logic Apps to other services. Every time a new authentication is made for a connector in Azure Logic Apps, a new resource of type **API connection** is created, and contains the information provided when configuring access to the service.
To see all the API connections, enter *API connections* in the header search box of the Azure portal. Note the columns of interest: - Display name - the "friendly" name you give to the connection every time you create one. - Status - indicates the connection status: error, connected.-- Resource group - API connections are created in the resource group of the playbook (Logic Apps) resource.
+- Resource group - API connections are created in the resource group of the playbook (Azure Logic Apps) resource.
Another way to view API connections would be to go to the **All Resources** blade and filter it by type *API connection*. This way allows the selection, tagging, and deletion of multiple connections at once.
sentinel Iot Advanced Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-advanced-threat-monitoring.md
In this tutorial, you:
> * Learn how to investigate Defender for IoT alerts in Microsoft Sentinel incidents > * Learn about the analytics rules, workbooks, and playbooks deployed to your Microsoft Sentinel workspace with the **Microsoft Defender for IoT** solution
+> [!IMPORTANT]
+>
+> The Microsoft Sentinel content hub experience is currently in **PREVIEW**, as is the **Microsoft Defender for IoT** solution. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ## Prerequisites Before you start, make sure you have:
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
After understanding how roles and permissions work in Microsoft Sentinel, you ca
|User type |Role |Resource group |Description | ||||| |**Security analysts** | [Microsoft Sentinel Responder](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder) | Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. |
-| | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules and run playbooks. <br><br>**Note**: This role also allows users to modify playbooks. |
+| | [Logic Apps Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run playbooks. |
|**Security engineers** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) |Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. <br><br>Create and edit workbooks, analytics rules, and other Microsoft Sentinel resources. |
-| | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules and run playbooks. <br><br>**Note**: This role also allows users to modify playbooks. |
+| | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run and modify playbooks. |
| **Service Principal** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) | Microsoft Sentinel's resource group | Automated configuration for management tasks |
After understanding how roles and permissions work in Microsoft Sentinel, you ca
In this article, you learned how to work with roles for Microsoft Sentinel users and what each role enables users to do.
-Find blog posts about Azure security and compliance at the [Microsoft Sentinel Blog](https://aka.ms/azuresentinelblog).
+Find blog posts about Azure security and compliance at the [Microsoft Sentinel Blog](https://aka.ms/azuresentinelblog).
sentinel Tutorial Respond Threats Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-respond-threats-playbook.md
Follow these steps to create a new playbook in Microsoft Sentinel:
1. The drop-down menu that appears under **Create** gives you three choices for creating playbooks:
- 1. If you're creating a **Standard** playbook (the new kind - see [Two types of Logic Apps](automate-responses-with-playbooks.md#two-types-of-logic-apps)), select **Blank playbook** and then follow the steps in the **Logic Apps Standard** tab below.
+ 1. If you're creating a **Standard** playbook (the new kind - see [Logic app types](automate-responses-with-playbooks.md#logic-app-types)), select **Blank playbook** and then follow the steps in the **Logic Apps Standard** tab below.
1. If you're creating a **Consumption** playbook (the original, classic kind), then, depending on which trigger you want to use, select either **Playbook with incident trigger** or **Playbook with alert trigger**. Then, continue following the steps in the **Logic Apps Consumption** tab below.
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
Title: Best practices for improving performance using Azure Service Bus description: Describes how to use Service Bus to optimize performance when exchanging brokered messages. Previously updated : 02/16/2022 Last updated : 09/28/2022 ms.devlang: csharp
Throughout this article, the term "client" refers to any entity that accesses Se
## Resource planning and considerations
-As with any technical resourcing, prudent planning is key in ensuring that Azure Service Bus is providing the performance that your application expects. The right configuration or topology for your Service Bus namespaces depends on a host of factors involving your application architecture and how each of the Service Bus features are used.
+As with any technical resourcing, prudent planning is key in ensuring that Azure Service Bus is providing the performance that your application expects. The right configuration or topology for your Service Bus namespaces depends on a host of factors involving your application architecture and how each of the Service Bus features is used.
### Pricing tier
-Service Bus offers various pricing tiers. It is recommended to pick the appropriate tier for your application requirements.
+Service Bus offers various pricing tiers. It's recommended to pick the appropriate tier for your application requirements.
* **Standard tier** - Suited for developer/test environments or low throughput scenarios where the applications are **not sensitive** to throttling.
- * **Premium tier** - Suited for production environments with varied throughput requirements where predictable latency and throughput is required. Additionally, Service Bus premium namespaces can be [auto scaled](automate-update-messaging-units.md) and can be enabled to accommodate spikes in throughput.
+ * **Premium tier** - Suited for production environments with varied throughput requirements where predictable latency and throughput are required. Additionally, Service Bus premium namespaces can be [auto scaled](automate-update-messaging-units.md) and can be enabled to accommodate spikes in throughput.
> [!NOTE] > If the right tier is not picked, there is a risk of overwhelming the Service Bus namespace which may lead to [throttling](service-bus-throttling.md).
As expected, throughput is higher for smaller message payloads that can be batch
#### Benchmarks
-Here is a [GitHub sample](https://github.com/Azure-Samples/service-bus-dotnet-messaging-performance) which you can run to see the expected throughput you will receive for your SB namespace. In our [benchmark tests](https://techcommunity.microsoft.com/t5/Service-Bus-blog/Premium-Messaging-How-fast-is-it/ba-p/370722), we observed approximately 4 MB/second per Messaging Unit (MU) of ingress and egress.
+Here's a [GitHub sample](https://github.com/Azure-Samples/service-bus-dotnet-messaging-performance) which you can run to see the expected throughput you'll receive for your SB namespace. In our [benchmark tests](https://techcommunity.microsoft.com/t5/Service-Bus-blog/Premium-Messaging-How-fast-is-it/ba-p/370722), we observed approximately 4 MB/second per Messaging Unit (MU) of ingress and egress.
The benchmarking sample doesn't use any advanced features, so the throughput your applications observe will be different based on your scenarios.
Using certain Service Bus features may require compute utilization that may decr
7. De-duplication & look back time window. 8. Forward to (forwarding from one entity to another).
-If your application leverages any of the above features and you are not receiving the expected throughput, you can review the **CPU usage** metrics and consider scaling up your Service Bus Premium namespace.
+If your application leverages any of the above features and you aren't receiving the expected throughput, you can review the **CPU usage** metrics and consider scaling up your Service Bus Premium namespace.
You can also utilize Azure Monitor to [automatically scale the Service Bus namespace](automate-update-messaging-units.md).
To increase the throughput of a queue, topic, or subscription, Service Bus batch
Additional store operations that occur during this interval are added to the batch. Batched store access only affects **Send** and **Complete** operations; receive operations aren't affected. Batched store access is a property on an entity. Batching occurs across all entities that enable batched store access.
-When creating a new queue, topic or subscription, batched store access is enabled by default.
+When you create a new queue, topic or subscription, batched store access is enabled by default.
# [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
Batched store access doesn't affect the number of billable messaging operations.
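As a minimal sketch of setting this entity property explicitly, the following shows queue creation with the Azure.Messaging.ServiceBus administration client; the connection string and queue name are placeholders.

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");

// Batched store access (EnableBatchedOperations) is on by default;
// it's set explicitly here only to make the intent visible.
var options = new CreateQueueOptions("my-queue")
{
    EnableBatchedOperations = true
};

await adminClient.CreateQueueAsync(options);
```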
When a message is prefetched, the service locks the prefetched message. With the lock, the prefetched message can't be received by a different receiver. If the receiver can't complete the message before the lock expires, the message becomes available to other receivers. The prefetched copy of the message remains in the cache. The receiver that consumes the expired cached copy will receive an exception when it tries to complete that message. By default, the message lock expires after 60 seconds. This value can be extended to 5 minutes. To prevent the consumption of expired messages, set the cache size smaller than the number of messages that a client can consume within the lock timeout interval.
-When using the default lock expiration of 60 seconds, a good value for `PrefetchCount` is 20 times the maximum processing rates of all receivers of the factory. For example, a factory creates three receivers, and each receiver can process up to 10 messages per second. The prefetch count shouldn't exceed 20 X 3 X 10 = 600. By default, `PrefetchCount` is set to 0, which means that no additional messages are fetched from the service.
+When you use the default lock expiration of 60 seconds, a good value for `PrefetchCount` is 20 times the maximum processing rates of all receivers of the factory. For example, a factory creates three receivers, and each receiver can process up to 10 messages per second. The prefetch count shouldn't exceed 20 X 3 X 10 = 600. By default, `PrefetchCount` is set to 0, which means that no additional messages are fetched from the service.
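As a sketch of the guideline above (20 x 3 receivers x 10 messages/second = 600), this is how a prefetch count could be set on a receiver with Azure.Messaging.ServiceBus; the connection string and queue name are placeholders.

```csharp
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");

// 20 x 3 receivers x 10 messages/second = 600, per the guideline above.
ServiceBusReceiver receiver = client.CreateReceiver("my-queue", new ServiceBusReceiverOptions
{
    PrefetchCount = 600
});

var messages = await receiver.ReceiveMessagesAsync(maxMessages: 10);
```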
Prefetching messages increases the overall throughput for a queue or subscription because it reduces the overall number of message operations, or round trips. Fetching the first message, however, will take longer (because of the increased message size). Receiving prefetched messages from the cache will be faster because these messages have already been downloaded by the client.
While using these approaches together, consider the following cases -
There are some challenges with having a greedy approach, that is, keeping the prefetch count high, because it implies that the message is locked to a particular receiver. The recommendation is to try out prefetch values between the thresholds mentioned above and empirically identify what fits.
-## Multiple queues
+## Multiple queues or topics
If a single queue or topic can't handle the expected load, use multiple messaging entities (see the sketch at the end of this section). When using multiple entities, create a dedicated client for each entity, instead of using the same client for all entities.
+More queues or topics mean that you have more entities to manage at deployment time. From a scalability perspective, there isn't much of a difference that you would notice, because Service Bus already spreads the load across multiple logs internally; whether you use two queues or topics or six won't make a material difference.
+
+The tier of service you use impacts performance predictability. If you choose **Standard** tier, throughput and latency are best effort over a shared multi-tenant infrastructure. Other tenants on the same cluster may impact your throughput. If you choose **Premium**, you get resources that give you predictable performance, and your multiple queues or topics get processed out of that resource pool. For more information, see [Pricing tiers](#pricing-tier).
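To illustrate the dedicated-client guidance above, here's a minimal sketch with Azure.Messaging.ServiceBus; the queue names are hypothetical, and round-robin is only one way to spread sends across entities.

```csharp
using System.Collections.Generic;
using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
string[] queueNames = { "orders-1", "orders-2", "orders-3" };  // hypothetical entities

var senders = new List<ServiceBusSender>();
foreach (string queueName in queueNames)
{
    // A dedicated client (and therefore connection) per entity, as recommended above.
    var client = new ServiceBusClient(connectionString);
    senders.Add(client.CreateSender(queueName));
}

// Spread sends across the entities, for example round-robin by a message counter.
int messageNumber = 42;
var sender = senders[messageNumber % senders.Count];
await sender.SendMessageAsync(new ServiceBusMessage($"payload {messageNumber}"));
```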
+ ## Scenarios The following sections describe typical messaging scenarios and outline the preferred Service Bus settings. Throughput rates are classified as small (less than 1 message/second), moderate (1 message/second or greater but less than 100 messages/second) and high (100 messages/second or greater). The number of clients are classified as small (5 or fewer), moderate (more than 5 but less than or equal to 20), and large (more than 20).
To maximize throughput, follow these guidelines:
Goal: Maximize the throughput of a topic with a large number of subscriptions. A message is received by many subscriptions, which means the combined receive rate over all subscriptions is much larger than the send rate. The number of senders is small. The number of receivers per subscription is small.
-Topics with a large number of subscriptions typically expose a low overall throughput if all messages are routed to all subscriptions. It's because each message is received many times, and all messages in a topic and all its subscriptions are stored in the same store. The assumption here is that the number of senders and number of receivers per subscription is small. Service Bus supports up to 2,000 subscriptions per topic.
+Topics with a large number of subscriptions typically expose a low overall throughput if all messages are routed to all subscriptions. It's because each message is received many times, and all messages in a topic and all its subscriptions are stored in the same store. The assumption here is that the number of senders and number of receivers per subscription is small. Service Bus supports up to 2,000 subscriptions per topic.
To maximize throughput, try the following steps:
service-bus-messaging Service Bus Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-transactions.md
Title: Overview of transaction processing in Azure Service Bus description: This article gives you an overview of transaction processing and the send via feature in Azure Service Bus. Previously updated : 03/21/2022 Last updated : 09/28/2022 ms.devlang: csharp
This article discusses the transaction capabilities of Microsoft Azure Service Bus. Much of the discussion is illustrated by the [AMQP Transactions with Service Bus sample](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/TransactionsAndSendVia/TransactionsAndSendVia/AMQPTransactionsSendVia). This article is limited to an overview of transaction processing and the *send via* feature in Service Bus, while the Atomic Transactions sample is broader and more complex in scope. > [!NOTE]
-> The basic tier of Service Bus doesn't support transactions. The standard and premium tiers support transactions. For differences between these tiers, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/).
+> - The basic tier of Service Bus doesn't support transactions. The standard and premium tiers support transactions. For differences between these tiers, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/).
+> - Mixing management and messaging operations in a transaction isn't supported.
## Transactions in Service Bus
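A minimal sketch of the transactional pattern this article covers, using Azure.Messaging.ServiceBus and System.Transactions. The entity names are placeholders, and `EnableCrossEntityTransactions` is needed here only because the receive and send target different entities.

```csharp
using System.Transactions;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient(
    "<service-bus-connection-string>",
    new ServiceBusClientOptions { EnableCrossEntityTransactions = true });

ServiceBusReceiver receiver = client.CreateReceiver("input-queue");
ServiceBusSender sender = client.CreateSender("output-queue");

ServiceBusReceivedMessage received = await receiver.ReceiveMessageAsync();

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // Both operations commit together; if the scope isn't completed, both roll back.
    await receiver.CompleteMessageAsync(received);
    await sender.SendMessageAsync(new ServiceBusMessage("processed"));
    scope.Complete();
}
```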
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
This article provides an overview of Service Connector.
Any application that runs on Azure compute services and requires a backing service, can use Service Connector. Find below some examples that can use Service Connector to simplify service-to-service connection experience.
-* **WebApp + DB:** Use Service Connector to connect PostgreSQL, MySQL, or Cosmos DB to your App Service.
-* **WebApp + Storage:** Use Service Connector to connect to Azure Storage accounts and use your preferred storage products easily in your App Service.
-* **Spring Cloud + Database:** Use Service Connector to connect PostgreSQL, MySQL, SQL DB or Cosmos DB to your Spring Cloud application.
-* **Spring Cloud + Apache Kafka:** Service Connector can help you connect your Spring Cloud application to Apache Kafka on Confluent Cloud.
+* **WebApp/Container Apps/Spring Apps + DB:** Use Service Connector to connect PostgreSQL, MySQL, or Cosmos DB to your App Service/Container Apps/Spring Apps.
+* **WebApp/Container Apps/Spring Apps + Storage:** Use Service Connector to connect to Azure Storage accounts and use your preferred storage products easily for any of your apps.
+* **WebApp/Container Apps/Spring Apps + Messaging
See [what services are supported in Service Connector](#what-services-are-supported-in-service-connector) to see more supported services and application patterns.
spring-apps How To Appdynamics Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-appdynamics-java-agent-monitor.md
ms.devlang: azurecli
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard tier ❌️ Enterprise tier
This article explains how to use the AppDynamics Java Agent to monitor Spring Boot applications in Azure Spring Apps.
spring-apps How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-application-insights.md
zone_pivot_groups: spring-apps-tier-selection
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard tier ❌️ Enterprise tier
This article explains how to monitor applications by using the Application Insights Java agent in Azure Spring Apps.
spring-apps How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-dynatrace-one-agent-monitor.md
ms.devlang: azurecli
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard tier ❌️ Enterprise tier
This article shows you how to use Dynatrace OneAgent to monitor Spring Boot applications in Azure Spring Apps.
To add the key/value pairs using the Azure portal, use the following steps:
1. Navigate to the list of your existing applications.
- :::image type="content" source="media/dynatrace-oneagent/existing-applications.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps Apps section." lightbox="media/dynatrace-oneagent/existing-applications.png":::
+ :::image type="content" source="media/dynatrace-oneagent/existing-applications.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps section." lightbox="media/dynatrace-oneagent/existing-applications.png":::
1. Select an application to navigate to the **Overview** page of the application.
environment_variables = {
} ```
-### Automate provisioning using an Bicep file
+### Automate provisioning using a Bicep file
To configure the environment variables in a Bicep file, add the following code to the file, replacing the *\<...>* placeholders with your own values. For more information, see [Microsoft.AppPlatform Spring/apps/deployments](/azure/templates/microsoft.appplatform/spring/apps/deployments?tabs=bicep).
spring-apps How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-build-service.md
Title: How to Use Tanzu Build Service in Azure Spring Apps Enterprise Tier
-description: How to Use Tanzu Build Service in Azure Spring Apps Enterprise Tier
+description: Learn how to use Tanzu Build Service in Azure Spring Apps Enterprise Tier
Previously updated : 02/09/2022 Last updated : 09/23/2022
In Azure Spring Apps, the existing Standard tier already supports compiling user
## Build Agent Pool
-Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps. You can configure the number of resources given to the build agent pool during or after creating a new service instance of Azure Spring Apps using the **VMware Tanzu settings**.
+Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps. You can configure the number of resources given to the build agent pool when you create a new service instance of Azure Spring Apps using the **VMware Tanzu settings**.
-The Build Agent Pool scale set sizes available are:
+The following Build Agent Pool scale set sizes are available:
| Scale Set | CPU/Gi | |--||
The Build Agent Pool scale set sizes available are:
| S4 | 5 vCPU, 10 Gi | | S5 | 6 vCPU, 12 Gi |
-The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size.
+The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size on the **Build Service** page after you've created the service instance.
## Default Builder and Tanzu Buildpacks
The following list shows the Tanzu Buildpacks available in Azure Spring Apps Ent
- tanzu-buildpacks/nodejs - tanzu-buildpacks/python
-For details about Tanzu Buildpacks, see [Using the Tanzu Partner Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html).
- ## Build apps using a custom builder Besides the `default` builder, you can also create custom builders with the provided buildpacks. All the builders configured in a Spring Cloud Service instance are listed in the **Build Service** section under **VMware Tanzu components**. Select **Add** to create a new builder. The image below shows the resources you should use to create the custom builder. You can also edit a custom builder when the builder isn't used in a deployment. You can update the buildpacks or the [OS Stack](https://docs.pivotal.io/tanzu-buildpacks/stacks.html), but the builder name is read only. You can delete any custom builder when the builder isn't used in a deployment, but the `default` builder is read only.
If the builder isn't specified, the `default` builder will be used. The builder
You can also configure the build environment and build resources by using the following command: ```azurecli
-az spring-cloud app deploy \
+az spring app deploy \
--name <app-name> \ --build-env <key1=value1>, <key2=value2> \ --build-cpu <build-cpu-size> \
az spring-cloud app deploy \
If you're using the `tanzu-buildpacks/java-azure` buildpack, we recommend that you set the `BP_JVM_VERSION` environment variable in the `build-env` argument.
-When you use a custom builder in an app deployment, the builder can't make edits and deletions. If you want to change the configuration, create a new builder and use the new builder to deploy the app. After you deploy the app with the new builder, the deployment is linked to the new builder. You can then migrate the deployments under the previous builder to the new builder, and make edits and deletions.
+When you use a custom builder in an app deployment, the builder can't make edits and deletions. If you want to change the configuration, create a new builder. Use the new builder to deploy the app.
+
+After you deploy the app with the new builder, the deployment is linked to the new builder. You can then migrate the deployments under the previous builder to the new builder, and make edits and deletions.
## Real-time build logs
Currently, buildpack binding only supports binding the buildpacks listed below.
- [ElasticAPM Partner Buildpack](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html#elastic-apm). - [Elastic Configuration](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html).
+Not all Tanzu Buildpacks support all service binding types. The following table shows the binding types that are supported by Tanzu Buildpacks and Tanzu Partner Buildpacks.
+
+|Buildpack|ApplicationInsights|NewRelic|AppDynamics|Dynatrace|ElasticAPM|
+|---|---|---|---|---|---|
+|Java |✅|✅|✅|✅|✅|
+|Dotnet|❌|❌|❌|✅|❌|
+|Go |❌|❌|❌|✅|❌|
+|Python|❌|❌|❌|❌|❌|
+|NodeJS|❌|✅|✅|✅|✅|
+
+To edit service bindings for the builder, select **Edit**. After a builder is bound to the service bindings, the service bindings are enabled for an app deployed with the builder.
++
+> [!NOTE]
+> When configuring environment variables for APM bindings, use key names without a prefix. For example, do not use a `DT_` prefix for a Dynatrace binding. Tanzu APM buildpacks will transform the key name to the original environment variable name with a prefix.
+ ## Manage buildpack bindings You can manage buildpack bindings with the Azure portal or the Azure CLI.
Follow these steps to view the current buildpack bindings:
1. Open the [Azure portal](https://portal.azure.com/?AppPlatformExtension=entdf#home). 1. Select **Build Service**. 1. Select **Edit** under the **Bindings** column to view the bindings configured under a builder.
+
++
+### Create a buildpack binding
+
+To create a buildpack binding, select **Unbound** on the **Edit Bindings** page, specify binding properties, and then select **Save**.
### Unbind a buildpack binding
-There are two ways to unbind a buildpack binding. You can either select the **Bound** hyperlink and then select **Unbind binding**, or select **Edit Binding** and then select **Unbind**.
+You can unbind a buildpack binding by using the **Unbind binding** command, or by editing binding properties.
+
+To use the **Unbind binding** command, select the **Bound** hyperlink, and then select **Unbind binding**.
++
+To unbind a buildpack binding by editing binding properties, select **Edit Binding**, and then select **Unbind**.
+
-If you unbind a binding, the bind status will change from **Bound** to **Unbound**.
+When you unbind a binding, the bind status changes from **Bound** to **Unbound**.
### [Azure CLI](#tab/azure-cli)
storage-mover Job Definition Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/job-definition-create.md
Title: How to define and start a migration job
-description: To migrate a share, create a job definition in a project and start it.
+ Title: How to define a migration job
+description: To migrate a share, create a job definition in a project.
Previously updated : 09/20/2022 Last updated : 09/26/2022 <!--
Use the `New-AzStorageMoverJobDefinition` cmdlet to create new job definition re
```powershell
-## Set variables
+## Set variables
$subscriptionID = "Your subscription ID" $resourceGroupName = "Your resource group name" $storageMoverName = "Your storage mover name"
-## Log into Azure with your Azure credentials
+## Log into Azure with your Azure credentials
Connect-AzAccount -SubscriptionId $subscriptionID
-## Define the source endpoint: an NFS share in this example
-## There is a separate cmdlet for creating each type of endpoint.
-## (Each storage location type has different properties.)
-## Run "Get-Command -Module Az.StorageMover" to see a full list.
+## Define the source endpoint: an NFS share in this example
+## There is a separate cmdlet for creating each type of endpoint.
+## (Each storage location type has different properties.)
+## Run "Get-Command -Module Az.StorageMover" to see a full list.
$sourceEpName = "Your source endpoint name could be the name of the share" $sourceEpDescription = "Optional, up to 1024 characters"
-$sourceEpHost = "The IP address or DNS name of the device (NAS / SERVER) that hosts your source share"
+$sourceEpHost = "The IP address or DNS name of the source share NAS or server"
$sourceEpExport = "The name of your source share"
-## Note that Host and Export will be concatenated to Host:/Export to form the full path to the source NFS share
+## Note that Host and Export will be concatenated to Host:/Export to form the full path
+## to the source NFS share
New-AzStorageMoverNfsEndpoint `
    -ResourceGroupName $resourceGroupName `
New-AzStorageMoverNfsEndpoint `
    -Export $sourceEpExport `
    -Description $sourceEpDescription # Description optional
-## Define the target endpoint: an Azure blob container in this example
-$targetEpName = "Your target endpoint name could be the name of the target blob container"
+## Define the target endpoint: an Azure blob container in this example
+$targetEpName = "Target endpoint or blob container name"
$targetEpDescription = "Optional, up to 1024 characters" $targetEpContainer = "The name of the target container in Azure"
-$targetEpSaResourceId = /subscriptions/<GUID>/resourceGroups/<name>/providers/Microsoft.Storage/storageAccounts/<storageAccountName>
-## Note: the target storage account can be in a different subscription and region than the storage mover resource.
-## Only the storage account resource ID contains a fully qualified reference.
+$targetEpSaResourceId = "/subscriptions/<GUID>/resourceGroups/<name>/providers/Microsoft.Storage/storageAccounts/<storageAccountName>"
+## Note: the target storage account can be in a different subscription and region than
+## the storage mover resource.
+##
+## Only the storage account resource ID contains a fully qualified reference.
New-AzStorageMoverAzStorageContainerEndpoint `
    -ResourceGroupName $resourceGroupName `
New-AzStorageMoverAzStorageContainerEndpoint `
    -Description $targetEpDescription # Description optional

## Create a job definition resource
-$projectName = "Your project name"
-$jobDefName = "Your job definition name"
+$projectName = "Your project name"
+$jobDefName = "Your job definition name"
$JobDefDescription = "Optional, up to 1024 characters"
-$jobDefCopyMode = "Additive"
-$agentName = "The name of one of your agents previously registered to the same storage mover resource"
+$jobDefCopyMode = "Additive"
+$agentName = "The name of an agent previously registered to the same storage mover resource"
New-AzStorageMoverJobDefinition `
New-AzStorageMoverJobDefinition `
    -AgentName $agentName `
    -Description $JobDefDescription # Description optional
-## When you are ready to start migrating, you can run the job definition
-Start-AzStorageMoverJobDefinition `
- -JobDefinitionName $jobDefName `
- -ProjectName $projectName `
- -ResourceGroupName $resourceGroupName `
- -StorageMoverName $storageMoverName
- ```
storage Data Lake Storage Migrate Gen1 To Gen2 Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md
When we copy the data over to your Gen2-enabled account, we automatically create
When you copy the data over to your Gen2-enabled account, two factors that can affect performance are the number of files and the amount of metadata you have. For example, many small files can affect the performance of the migration.
+#### Will WebHDFS File System APIs be supported on Gen2 accounts post-migration?
+
+WebHDFS File System APIs of Gen1 will be supported on Gen2, but with certain deviations, and only limited functionality is supported through the compatibility layer. Customers should plan to leverage ADLS Gen2-specific APIs for better performance and features.
+ ## Next steps

-- Learn about migration in general. For more information, see [Migrate Azure Data Lake Storage from Gen1 to Gen2](data-lake-storage-migrate-gen1-to-gen2.md).
+- Learn about migration in general. For more information, see [Migrate Azure Data Lake Storage from Gen1 to Gen2](data-lake-storage-migrate-gen1-to-gen2.md).
storage Lifecycle Management Policy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md
A lifecycle management policy must be read or written in full. Partial updates a
> [!NOTE] > A lifecycle management policy can't change the tier of a blob that uses an encryption scope.
+> [!NOTE]
+> The delete action of a lifecycle management policy won't work with any blob in an immutable container. With an immutable policy, objects can be created and read, but not modified or deleted. For more information, see [Store business-critical blob data with immutable storage](/azure/storage/blobs/immutable-storage-overview).
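As a quick way to check whether that limitation applies to a container, the following hedged sketch (hypothetical resource names, Az.Storage module) reads the container's immutability policy; if a policy is in effect, lifecycle delete actions won't remove blobs in that container:

```powershell
# Requires the Az.Storage module and an authenticated session (Connect-AzAccount)
Get-AzRmStorageContainerImmutabilityPolicy `
    -ResourceGroupName "<resource-group-name>" `
    -StorageAccountName "<storage-account-name>" `
    -ContainerName "<container-name>"
```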
+ ## See also

- [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
storage Point In Time Restore Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-manage.md
You can use point-in-time restore to restore one or more sets of block blobs to
To learn more about point-in-time restore, see [Point-in-time restore for block blobs](point-in-time-restore-overview.md).
+> [!NOTE]
+> Point-in-time restore is supported for general-purpose v2 storage accounts in the standard performance tier only. Only data in the hot and cool access tiers can be restored with point-in-time restore.
+ > [!CAUTION] > Point-in-time restore supports restoring operations on block blobs only. Operations on containers cannot be restored. If you delete a container from the storage account by calling the [Delete Container](/rest/api/storageservices/delete-container) operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers and blobs to protect against accidental deletion. For more information, see [Soft delete for containers](soft-delete-container-overview.md) and [Soft delete for blobs](soft-delete-blob-overview.md).
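For reference, a minimal PowerShell sketch of a restore operation is shown below (hypothetical account and blob-range names, Az.Storage module); it restores the block blobs in the given range to their state 12 hours ago, subject to the account-type, access-tier, and container constraints noted above:

```powershell
# Requires the Az.Storage module; the restore point must be in UTC and within the configured retention period
$range = New-AzStorageBlobRangeToRestore -StartRange "container1/" -EndRange "container2/"

Restore-AzStorageBlobRange `
    -ResourceGroupName "<resource-group-name>" `
    -StorageAccountName "<storage-account-name>" `
    -TimeToRestore (Get-Date).AddHours(-12).ToUniversalTime() `
    -BlobRestoreRange $range
```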
storage Secure File Transfer Protocol Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-performance.md
Title: SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage (preview) | Microsoft Docs description: Optimize the performance of your SSH File Transfer Protocol (SFTP) requests by using the recommendations in this article.-+ Last updated 09/13/2022 -+
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
Title: Actions and attributes for Azure role assignment conditions in Azure Storage
+ Title: Actions and attributes for Azure role assignment conditions for Azure Blob Storage
-description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) in Azure Storage.
+description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) for Azure Blob Storage.
Previously updated : 09/14/2022 Last updated : 09/28/2022
-# Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
+# Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)
> [!IMPORTANT] > Azure ABAC and Azure role assignment conditions are currently in preview.
In this preview, storage accounts support the following suboperations:
> | [Sets the access tier on a blob](#sets-the-access-tier-on-a-blob) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` | `Blob.Write.Tier` |
> | [Write to a blob with blob index tags](#write-to-a-blob-with-blob-index-tags) | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` <br/> `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action` | `Blob.Write.WithTagHeaders` |
-## Azure Blob storage actions and suboperations
+## Azure Blob Storage actions and suboperations
-This section lists the supported Azure Blob storage actions and suboperations you can target for conditions.
+This section lists the supported Azure Blob Storage actions and suboperations you can target for conditions.
### List blobs
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Examples** | [Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers)<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path)<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path)<br/>[Example: Write blobs in named containers with a path](storage-auth-abac-examples.md#example-write-blobs-in-named-containers-with-a-path)<br/>[Example: Read only current blob versions](storage-auth-abac-examples.md#example-read-only-current-blob-versions)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots)<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) |
> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
-## Azure Blob storage attributes
+## Azure Blob Storage attributes
-This section lists the Azure Blob storage attributes you can use in your condition expressions depending on the action you target. If you select multiple actions for a single condition, there might be fewer attributes to choose from for your condition because the attributes must be available across the selected actions.
+This section lists the Azure Blob Storage attributes you can use in your condition expressions depending on the action you target. If you select multiple actions for a single condition, there might be fewer attributes to choose from for your condition because the attributes must be available across the selected actions.
> [!NOTE] > Attributes and values listed are considered case-insensitive, unless stated otherwise.
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Title: Example Azure role assignment conditions (preview) - Azure RBAC
+ Title: Example Azure role assignment conditions for Blob Storage (preview)
-description: Example Azure role assignment conditions for Azure attribute-based access control (Azure ABAC).
+description: Example Azure role assignment conditions for Blob Storage (preview).
Previously updated : 09/01/2022 Last updated : 09/28/2022 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
-# Example Azure role assignment conditions (preview)
+# Example Azure role assignment conditions for Blob Storage (preview)
> [!IMPORTANT] > Azure ABAC and Azure role assignment conditions are currently in preview. > This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This article list some examples of role assignment conditions.
+This article lists some examples of role assignment conditions for controlling access to Azure Blob Storage.
## Prerequisites
Here are the settings to add this condition using the Azure portal.
## Next steps

- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](storage-auth-abac-portal.md)
-- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](storage-auth-abac-attributes.md)
- [Azure role assignment condition format and syntax (preview)](../../role-based-access-control/conditions-format.md)
- [Troubleshoot Azure role assignment conditions (preview)](../../role-based-access-control/conditions-troubleshoot.md)
storage Storage Auth Abac Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-security.md
Title: Security considerations for Azure role assignment conditions in Azure Storage (preview)
+ Title: Security considerations for Azure role assignment conditions in Azure Blob Storage (preview)
description: Security considerations for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC).
-# Security considerations for Azure role assignment conditions in Azure Storage (preview)
+# Security considerations for Azure role assignment conditions in Azure Blob Storage (preview)
> [!IMPORTANT] > Azure ABAC and Azure role assignment conditions are currently in preview.
For conditions on the source blob, `@Resource` conditions on the `Microsoft.Stor
## See also

- [Authorize access to blobs using Azure role assignment conditions (preview)](storage-auth-abac.md)
-- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](storage-auth-abac-attributes.md)
- [What is Azure attribute-based access control (Azure ABAC)? (preview)](../../role-based-access-control/conditions-overview.md)
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
Title: Authorize access to blobs using Azure role assignment conditions (preview)
+ Title: Authorize access to Azure Blob Storage using Azure role assignment conditions (preview)
-description: Authorize access to Azure blobs and Azure Data Lake Storage Gen2 (ADLS G2) using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Storage attributes.
+description: Authorize access to Azure Blob Storage and Azure Data Lake Storage Gen2 (ADLS G2) using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Blob Storage attributes.
Previously updated : 09/14/2022 Last updated : 09/28/2022
-# Authorize access to blobs using Azure role assignment conditions (preview)
+# Authorize access to Azure Blob Storage using Azure role assignment conditions (preview)
> [!IMPORTANT] > Azure ABAC and Azure role assignment conditions are currently in preview.
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
There are two ways to initiate a conversion:
#### Customer-initiated conversion (preview) > [!IMPORTANT]
-> Customer initiated conversion is currently in preview and available in all public ZRS regions except for the following:
+> Customer-initiated conversion is currently in preview and available in all public ZRS regions except for the following:
> > - (Europe) West Europe > - (Europe) UK South
There are two ways to initiate a conversion:
> - (North America) East US > - (North America) East US 2 >
+> To opt in to the preview, see [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md) and specify **CustomerInitiatedMigration** as the feature name.
+>
> This preview version is provided without a service level agreement, and might not be suitable for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
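As a hedged sketch of that opt-in step, you could register the preview feature with Azure PowerShell as shown below. The feature name comes from the note above; the **Microsoft.Storage** provider namespace is an assumption here, so confirm it on the linked preview-features page.

```powershell
# Register the customer-initiated conversion preview feature for the current subscription
Register-AzProviderFeature -FeatureName "CustomerInitiatedMigration" -ProviderNamespace "Microsoft.Storage"

# Check the registration state; it can take a while to move from Registering to Registered
Get-AzProviderFeature -FeatureName "CustomerInitiatedMigration" -ProviderNamespace "Microsoft.Storage"
```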
storage Storage Auth Aad App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-aad-app.md
Title: Authorize access to blob or queue data from a native or web application
-description: Use Azure Active Directory to authenticate from within a client application, acquire an OAuth 2.0 token, and authorize requests to Azure Blob storage and Queue storage.
+description: Use Azure Active Directory to authenticate from within a client application, acquire an OAuth 2.0 access token, and authorize requests to Azure Blob storage and Queue storage.
Previously updated : 10/11/2021 Last updated : 09/28/2022
A key advantage of using Azure Active Directory (Azure AD) with Azure Blob stora
This article shows how to configure your native application or web application for authentication with the Microsoft identity platform using a sample application that is available for download. The sample application features .NET, but other languages use a similar approach. For more information about the Microsoft identity platform, see [Microsoft identity platform overview](../../active-directory/develop/v2-overview.md).
-For an overview of the OAuth 2.0 code grant flow, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
+For an overview of the OAuth 2.0 authorization code flow, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
## About the sample application
-The sample application provides an end-to-end experience that shows how to configure a web application for authentication with Azure AD in a local development environment. To view and run the sample application, first clone or download it from [GitHub](https://github.com/Azure-Samples/storage-dotnet-azure-ad-msal). Then follow the steps outlined in the article to configure an Azure app registration and update the application for your environment.
+The sample application provides an end-to-end experience that shows how to configure a web application for authentication with Azure AD in a local development environment. To view and run the sample application, first clone or download it from [GitHub](https://github.com/Azure-Samples/storage-dotnet-azure-ad-msal). Then follow the steps outlined in the article to configure an Azure AD app registration and update the application for your environment.
## Assign a role to an Azure AD security principal
To authenticate a security principal from your Azure Storage application, first
## Register your application with an Azure AD tenant
-The first step in using Azure AD to authorize access to storage resources is registering your client application with an Azure AD tenant from the [Azure portal](https://portal.azure.com). When you register your client application, you supply information about the application to Azure AD. Azure AD then provides a client ID (also called an *application ID*) that you use to associate your application with Azure AD at runtime. To learn more about the client ID, see [Application and service principal objects in Azure Active Directory](../../active-directory/develop/app-objects-and-service-principals.md). To register your Azure Storage application, follow the steps shown in [Quickstart: Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
+The first step in using Azure AD to authorize access to storage resources is registering your client application in an Azure AD tenant by using the [Azure portal](https://portal.azure.com). When you register your client application, you supply information about the application to Azure AD. Azure AD then provides a client ID (also called an *application ID*) that you use to associate your application with Azure AD at runtime. To learn more about the client ID, see [Application and service principal objects in Azure Active Directory](../../active-directory/develop/app-objects-and-service-principals.md). To register your Azure Storage application, follow the steps shown in [Quickstart: Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
The following image shows common settings for registering a web application. Note that in this example, the redirect URI is set to `http://localhost:5000/signin-oidc` for testing the sample application in the development environment. You can modify this setting later under the **Authentication** setting for your registered application in the Azure portal: :::image type="content" source="media/storage-auth-aad-app/app-registration.png" alt-text="Screenshot showing how to register your storage application with Azure AD"::: > [!NOTE]
-> If you register your application as a native application, you can specify any valid URI for the **Redirect URI**. For native applications, this value does not have to be a real URL. For web applications, the redirect URI must be a valid URI, because it specifies the URL to which tokens are provided.
+> If you register your application as a native application, you can specify any valid URI for the **Redirect URI**. For native applications, this value does not have to be a real URL. For web applications, the redirect URI must be a valid URI because it specifies the URL to which tokens are provided.
-After you've registered your application, you'll see the application ID (or client ID) under **Settings**:
+After you've registered your application, you'll see the **Application (client) ID** under **Settings**:
:::image type="content" source="media/storage-auth-aad-app/app-registration-client-id.png" alt-text="Screenshot showing the client ID":::
The application needs a client secret to prove its identity when requesting a to
![Screenshot showing client secret](media/storage-auth-aad-app/client-secret.png)
-### Enable implicit grant flow
+### Enable ID tokens
-Next, configure implicit grant flow for your application. Follow these steps:
+Next, tell the identity platform to also issue ID tokens for the app by enabling the *hybrid flow*. The hybrid flow combines the use of the authorization code grant for obtaining access tokens and OpenID Connect (OIDC) for getting ID tokens.
+
+To enable issuance of ID tokens for the app, follow these steps:
1. Navigate to your app registration in the Azure portal. 1. In the **Manage** section, select the **Authentication** setting.
Next, configure implicit grant flow for your application. Follow these steps:
## Client libraries for token acquisition
-Once you have registered your application and granted it permissions to access data in Azure Blob storage or Queue storage, you can add code to your application to authenticate a security principal and acquire an OAuth 2.0 token. To authenticate and acquire the token, you can use either one of the [Microsoft identity platform authentication libraries](../../active-directory/develop/reference-v2-libraries.md) or another open-source library that supports OpenID Connect 1.0. Your application can then use the access token to authorize a request against Azure Blob storage or Queue storage.
+Once you've registered your application and granted it permissions to access data in Azure Blob storage or Queue storage, you can add code to your application to authenticate a security principal and acquire an OAuth 2.0 access token. To authenticate and acquire the access token, you can use one of [Microsoft's open-source authentication libraries](../../active-directory/develop/reference-v2-libraries.md) or another library that supports OAuth 2.0 and OpenID Connect 1.0. Your application can then use the access token to authorize a request against Azure Blob storage or Queue storage.
For a list of scenarios for which acquiring tokens is supported, see the [authentication flows](../../active-directory/develop/msal-authentication-flows.md) section of the [Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-overview.md) documentation.
To authenticate a security principal with Azure AD, you need to include some wel
### Azure AD authority
-For Microsoft public cloud, the base Azure AD authority is as follows, where *tenant-id* is your Active Directory tenant ID (or directory ID):
+For the Azure public cloud, the base Azure AD authority is as follows, where *tenant-id* is your Active Directory tenant ID (or directory ID):
`https://login.microsoftonline.com/<tenant-id>/`
-The tenant ID identifies the Azure AD tenant to use for authentication. It is also referred to as the directory ID. To retrieve the tenant ID, navigate to the **Overview** page for your app registration in the Azure portal, and copy the value from there.
+The tenant ID identifies the Azure AD tenant to use for authentication. It's also referred to as the directory ID. To get the tenant ID, navigate to the **Overview** page for your app registration in the Azure portal and copy the value from there.
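If you prefer not to use the portal, a small hedged sketch in Azure PowerShell retrieves the same tenant ID from your signed-in context, assuming you sign in to the tenant that holds the app registration:

```powershell
# Sign in interactively, then read the tenant (directory) ID from the current context
Connect-AzAccount
(Get-AzContext).Tenant.Id
```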
### Azure Storage resource ID
The tenant ID identifies the Azure AD tenant to use for authentication. It is al
The code example shows how to get an access token from Azure AD. The access token is used to authenticate the specified user and then authorize a request to create a block blob. To get this sample working, first follow the steps outlined in the preceding sections.
-To request the token, you will need the following values from your app's registration:
+To request the access token, you'll need the following values from your app's registration:
- The name of your Azure AD domain. Retrieve this value from the **Overview** page of your Azure Active Directory.
- The tenant (or directory) ID. Retrieve this value from the **Overview** page of your app registration.
To run the code sample, create a storage account within the same subscription as
Next, explicitly assign the **Storage Blob Data Contributor** role to the user account under which you will run the sample code. To learn how to assign this role in the Azure portal, see [Assign an Azure role for access to blob data](../blobs/assign-azure-role-data-access.md). > [!NOTE]
-> When you create an Azure Storage account, you are not automatically assigned permissions to access data via Azure AD. You must explicitly assign yourself an Azure role for Azure Storage. You can assign it at the level of your subscription, resource group, storage account, or container or queue.
+> When you create an Azure Storage account, you're not automatically assigned permissions to access data via Azure AD. You must explicitly assign yourself an Azure role for Azure Storage. You can assign it at the level of your subscription, resource group, storage account, or container or queue.
>
> Prior to assigning yourself a role for data access, you will be able to access data in your storage account via the Azure portal because the Azure portal can also use the account key for data access. For more information, see [Choose how to authorize access to blob data in the Azure portal](../blobs/authorize-data-operations-portal.md).

### Create a web application that authorizes access to Blob storage with Azure AD
-When your application accesses Azure Storage, it does so on the user's behalf, meaning that blob or queue resources are accessed using the permissions of the user who is logged in. To try this code example, you need a web application that prompts the user to sign in using an Azure AD identity. You can create your own, or use the sample application provided by Microsoft.
+When your application accesses Azure Storage, it does so on the user's behalf, meaning that blob or queue resources are accessed using the permissions of the user who is logged in. To try this code example, you need a web application that prompts the user to sign in using an Azure AD identity. You can create your own or use the sample application provided by Microsoft.
-A completed sample web application that acquires a token and uses it to create a blob in Azure Storage is available on [GitHub](https://aka.ms/aadstorage). Reviewing and running the completed sample may be helpful for understanding the code examples. For instructions about how to run the completed sample, see the section titled [View and run the completed sample](#view-and-run-the-completed-sample).
+A completed sample web application that acquires an access token and uses it to create a blob in Azure Storage is available on [GitHub](https://aka.ms/aadstorage). Reviewing and running the completed sample may be helpful for understanding the following code examples. For instructions about how to run the completed sample, see the section titled [View and run the completed sample](#view-and-run-the-completed-sample).
-#### Add references and using statements
+#### Add assembly references and using directives
-From Visual Studio, install the Azure Storage client library. From the **Tools** menu, select **NuGet Package Manager**, then **Package Manager Console**. Type the following commands into the console window to install the necessary packages from the Azure Storage client library for .NET:
+In Visual Studio, install the Azure Storage client library and the authentication library. From the **Tools** menu, select **NuGet Package Manager**, then **Package Manager Console**. Type the following commands into the console window to install the necessary packages for the Azure Storage client library for .NET and the Microsoft.Identity.Web authentication library:
# [.NET v12 SDK](#tab/dotnet)
Install-Package Azure.Storage.Blobs
Install-Package Microsoft.Identity.Web -Version 0.4.0-preview ```
-Next, add the following using statements to the HomeController.cs file:
+Next, add the following using directives to the _HomeController.cs_ file:
```csharp using Microsoft.Identity.Web; //MSAL library for getting the access token
Install-Package Microsoft.Azure.Storage.Blob
Install-Package Microsoft.Identity.Web -Version 0.4.0-preview //or a later version ```
-Next, add the following using statements to the HomeController.cs file:
+Next, add the following using directives to the _HomeController.cs_ file:
```csharp using Microsoft.Identity.Client; //MSAL library for getting the access token
private static async Task<string> CreateBlob(string accessToken)
> [!NOTE]
-> To authorize blob and queue operations with an OAuth 2.0 token, you must use HTTPS.
+> To authorize blob and queue operations with an OAuth 2.0 access token, you must use HTTPS.
-In the example above, the .NET client library handles the authorization of the request to create the block blob. Azure Storage client libraries for other languages also handle the authorization of the request for you. However, if you are calling an Azure Storage operation with an OAuth token using the REST API, then you'll need to construct the **Authorization** header by using the OAuth token.
+In the example above, the .NET client library handles the authorization of the request to create the block blob. Azure Storage client libraries for other languages also handle the authorization of the request for you. However, if you're calling an Azure Storage operation with an OAuth access token using the REST API, then you'll need to construct the **Authorization** header by using the OAuth token.
To call Blob and Queue service operations using OAuth access tokens, pass the access token in the **Authorization** header using the **Bearer** scheme, and specify a service version of 2017-11-09 or higher, as shown in the following example:
public async Task<IActionResult> Blob()
} ```
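To make those header requirements concrete outside of the .NET client library, here's a hedged PowerShell sketch (hypothetical account, container, and blob names) that calls the Blob service [Put Blob](/rest/api/storageservices/put-blob) REST operation directly with a Bearer token and an explicit `x-ms-version` header:

```powershell
# $accessToken is an OAuth 2.0 access token acquired as described earlier in this article
$storageAccountName = "<storage-account-name>"
$containerName      = "<container-name>"
$blobName           = "sample.txt"

$uri = "https://$storageAccountName.blob.core.windows.net/$containerName/$blobName"

$headers = @{
    "Authorization"  = "Bearer $accessToken"   # Bearer scheme, as required for OAuth tokens
    "x-ms-version"   = "2020-10-02"            # any service version of 2017-11-09 or later
    "x-ms-blob-type" = "BlockBlob"             # Put Blob requires the blob type header
}

# Must be HTTPS; OAuth-authorized requests aren't accepted over HTTP
Invoke-RestMethod -Method Put -Uri $uri -Headers $headers -Body "Hello from an OAuth-authorized request"
```

The signed-in identity still needs a data-access role such as **Storage Blob Data Contributor** on the target container or account for the request to succeed.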
-Consent is the process of a user granting authorization to an application to access protected resources on their behalf. The Microsoft identity platform supports incremental consent, meaning that a security principal can request a minimum set of permissions initially and add permissions over time as needed. When your code requests an access token, specify the scope of permissions that your app needs. For more information about incremental consent, see [Incremental and dynamic consent](../../active-directory/azuread-dev/azure-ad-endpoint-comparison.md#incremental-and-dynamic-consent).
+Consent is the process of a user granting authorization to an application to access protected resources on their behalf. The Microsoft identity platform supports incremental consent, meaning that an application can request a minimum set of permissions initially and request more permissions over time as needed. When your code requests an access token, specify the scope of permissions that your app needs. For more information about incremental consent, see [Incremental and dynamic consent](../../active-directory/azuread-dev/azure-ad-endpoint-comparison.md#incremental-and-dynamic-consent).
## View and run the completed sample
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
The following permissions are included on the root directory of a file share:
## Connect to the Azure file share
-Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to connect to the Azure file share using the storage account key and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
+Run the script below from a normal (not elevated) PowerShell terminal to connect to the Azure file share using the storage account key and map the share to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
> [!NOTE] > You might see the **Full Control** ACL applied to a role already. This typically already offers the ability to assign permissions. However, because there are access checks at two levels (the share level and the file/directory level), this is restricted. Only users who have the **SMB Elevated Contributor** role and create a new file or directory can assign permissions on those new files or directories without using the storage account key. All other file/directory permission assignment requires connecting to the share using the storage account key first.
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Previously updated : 09/19/2022 Last updated : 09/28/2022
To enable AD DS authentication over SMB for Azure file shares, you need to regis
## Option one (recommended): Use AzFilesHybrid PowerShell module
-The cmdlets in the AzFilesHybrid PowerShell module make the necessary modifications and enable the feature for you. Because some parts of the cmdlets interact with your on-premises AD DS, we explain what the cmdlets do, so you can determine if the changes align with your compliance and security policies, and ensure you have the proper permissions to execute the cmdlets. Although we recommend using AzFilesHybrid module, if you're unable to do so, we provide [manual steps](#option-two-manually-perform-the-enablement-actions).
+The AzFilesHybrid PowerShell module provides cmdlets for deploying and configuring Azure Files. It includes cmdlets for domain joining storage accounts to your on-premises Active Directory and configuring your DNS servers. The cmdlets make the necessary modifications and enable the feature for you. Because some parts of the cmdlets interact with your on-premises AD DS, we explain what the cmdlets do, so you can determine if the changes align with your compliance and security policies, and ensure you have the proper permissions to execute the cmdlets. Although we recommend using AzFilesHybrid module, if you're unable to do so, we provide [manual steps](#option-two-manually-perform-the-enablement-actions).
### Download AzFilesHybrid module

- If you don't have [.NET Framework 4.7.2](https://dotnet.microsoft.com/download/dotnet-framework/net472) installed, install it now. It's required for the module to import successfully.
-- [Download and unzip the AzFilesHybrid module (GA module: v0.2.0+)](https://github.com/Azure-Samples/azure-files-samples/releases). Note that AES-256 Kerberos encryption is supported on v0.2.2 or above. If you've enabled the feature with an AzFilesHybrid version below v0.2.2 and want to update to support AES-256 Kerberos encryption, see [this article](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption).
-- Install and execute the module on a device that is domain joined to on-premises AD DS with AD DS credentials that have permissions to create a service logon account or a computer account in the target AD.
+- [Download and unzip the latest version of the AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). Note that AES-256 Kerberos encryption is supported on v0.2.2 or above. If you've enabled the feature with an AzFilesHybrid version below v0.2.2 and want to update to support AES-256 Kerberos encryption, see [this article](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption).
+- Install and execute the module on a device that is domain joined to on-premises AD DS with AD DS credentials that have permissions to create a service logon account or a computer account in the target AD (such as domain admin).
### Run Join-AzStorageAccount
-The `Join-AzStorageAccount` cmdlet performs the equivalent of an offline domain join on behalf of the specified storage account. The script uses the cmdlet to create a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) in your AD domain. If for whatever reason you can't use a computer account, you can alter the script to create a [service logon account](/windows/win32/ad/about-service-logon-accounts) instead. Note that service logon accounts don't support AES-256 encryption. If you choose to run the command manually, you should select the account best suited for your environment. You must run the script using an on-premises AD DS credential that is synced to your Azure AD. The on-premises AD DS credential must have either **Owner** or **Contributor** Azure role on the storage account.
+The `Join-AzStorageAccount` cmdlet performs the equivalent of an offline domain join on behalf of the specified storage account. By default, the script uses the cmdlet to create a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) in your AD domain. If for whatever reason you can't use a computer account, you can alter the script to create a [service logon account](/windows/win32/ad/about-service-logon-accounts) instead. Note that service logon accounts don't currently support AES-256 encryption. If you choose to run the command manually, you should select the account best suited for your environment.
The AD DS account created by the cmdlet represents the storage account. If the AD DS account is created under an organizational unit (OU) that enforces password expiration, you must update the password before the maximum password age. Failing to update the account password before that date results in authentication failures when accessing Azure file shares. To learn how to update the password, see [Update AD DS account password](storage-files-identity-ad-ds-update-password.md).
The AD DS account created by the cmdlet represents the storage account. If the A
> The `Join-AzStorageAccount` cmdlet will create an AD account to represent the storage account (file share) in AD. You can choose to register as a computer account or service logon account, see [FAQ](./storage-files-faq.md#security-authentication-and-access-control) for details. Service logon account passwords can expire in AD if they have a default password expiration age set on the AD domain or OU. Because computer account password changes are driven by the client machine and not AD, they don't expire in AD, although client computers change their passwords by default every 30 days. > For both account types, we recommend you check the password expiration age configured and plan to [update the password of your storage account identity](storage-files-identity-ad-ds-update-password.md) of the AD account before the maximum password age. You can consider [creating a new AD Organizational Unit in AD](/powershell/module/activedirectory/new-adorganizationalunit) and disabling password expiration policy on [computer accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852252(v=ws.11)) or service logon accounts accordingly.
-Replace the placeholder values with your own in the parameters below before executing the script in PowerShell.
+You must run the script below in PowerShell 5.1 on a device that's domain joined to your on-premises AD DS, using an on-premises AD DS credential that's synced to your Azure AD. The on-premises AD DS credential must have either **Owner** or **Contributor** Azure role on the storage account and have permissions to create a service logon account or computer account in the target AD. Replace the placeholder values with your own before executing the script.
```PowerShell # Change the execution policy to unblock importing AzFilesHybrid.psm1 module
Connect-AzAccount
# Define parameters
# $StorageAccountName is the name of an existing storage account that you want to join to AD
# $SamAccountName is the name of the to-be-created AD object, which is used by AD as the logon name
-# for the object.
+# for the object. It must be 20 characters or less and has certain character restrictions.
# See https://learn.microsoft.com/windows/win32/adschema/a-samaccountname for more information.
$SubscriptionId = "<your-subscription-id-here>"
$ResourceGroupName = "<resource-group-name-here>"
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
Title: Mount Azure file share to an AD DS-joined VM
-description: Learn how to mount a file share to your on-premises Active Directory Domain Services-joined machines.
+description: Learn how to mount an Azure file share to your on-premises Active Directory Domain Services domain-joined machines.
Previously updated : 06/22/2020 Last updated : 09/27/2022
Before you begin this article, make sure you complete the previous article, [configure directory and file level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
-The process described in this article verifies that your file share and access permissions are set up correctly and that you can access an Azure File share from a domain-joined VM. Share-level Azure role assignment can take some time to take effect.
+The process described in this article verifies that your SMB file share and access permissions are set up correctly and that you can access an Azure file share from a domain-joined VM. Share-level role assignment can take some time to take effect.
Sign in to the client by using the credentials that you granted permissions to, as shown in the following image.
Sign in to the client by using the credentials that you granted permissions to,
## Mounting prerequisites
-Before you can mount the file share, make sure you've gone through the following pre-requisites:
+Before you can mount the Azure file share, make sure you've gone through the following prerequisites:
-- If you are mounting the file share from a client that has previously mounted the file share using your storage account key, make sure that you have disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions to clear the mounted share with storage account key, refer to [FAQ page](./storage-files-faq.md#ad-ds--azure-ad-ds-authentication).-- Your client must have line of sight to your AD DS. If your machine or VM is out of the network managed by your AD DS, you will need to enable VPN to reach AD DS for authentication.
+- If you're mounting the file share from a client that has previously connected to the file share using your storage account key, make sure that you've disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on how to remove cached credentials with storage account key and delete existing SMB connections before initializing new connection with Azure AD or AD credentials, follow the two-step process on the [FAQ page](./storage-files-faq.md#ad-ds--azure-ad-ds-authentication).
+- Your client must have line of sight to your AD DS. If your machine or VM is outside of the network managed by your AD DS, you'll need to enable VPN to reach AD DS for authentication.
-Replace the placeholder values with your own values, then use the following command to mount the Azure file share. You always need to mount using the path shown below. Using CNAME for file mount is not supported for identity based authentication (AD DS or Azure AD DS).
+## Mount the file share
-```PSH
-# Always mount your share using.file.core.windows.net, even if you setup a private endpoint for your share.
+Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
+
+Always mount Azure file shares using the `<storage-account-name>.file.core.windows.net` path, even if you set up a private endpoint for your share. Using CNAME for file share mount isn't supported for identity-based authentication (AD DS or Azure AD DS).
+
+```powershell
$connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
-if ($connectTestResult.TcpTestSucceeded)
-{
- net use <desired-drive letter>: \\<storage-account-name>.file.core.windows.net\<fileshare-name>
-}
-else
-{
- Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
+if ($connectTestResult.TcpTestSucceeded) {
+ cmd.exe /C "cmdkey /add:`"<storage-account-name>.file.core.windows.net`" /user:`"localhost\<storage-account-name>`""
+ New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account-name>.file.core.windows.net\<file-share-name>" -Persist
+} else {
+ Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
}
```

If you run into issues mounting with AD DS credentials, refer to [Unable to mount Azure Files with AD credentials](storage-troubleshoot-windows-file-connection-problems.md#unable-to-mount-azure-files-with-ad-credentials) for guidance.
-If mounting your file share succeeded, then you have successfully enabled and configured on-premises AD DS authentication for your Azure file shares.
+If mounting your file share succeeded, then you've successfully enabled and configured on-premises AD DS authentication for your Azure file share.
## Next steps
storage Storage Files Identity Ad Ds Update Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-update-password.md
Previously updated : 07/13/2022 Last updated : 09/28/2022
If you registered the Active Directory Domain Services (AD DS) identity/account
> [!NOTE] > A storage account identity in AD DS can be either a service account or a computer account. Service account passwords can expire in AD; however, because computer account password changes are driven by the client machine and not AD, they don't expire in AD.
-To trigger password rotation, you can run the `Update-AzStorageAccountADObjectPassword` command from the [AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). This command must be run in an on-premises AD DS-joined environment using a hybrid user with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account, and uses it to update the password of the registered account in AD DS. Then, it regenerates the target Kerberos key of the storage account, and updates the password of the registered account in AD DS. You must run this command in an on-premises AD DS-joined environment.
+To trigger password rotation, you can run the `Update-AzStorageAccountADObjectPassword` cmdlet from the [AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). This command must be run in an on-premises AD DS-joined environment by a [hybrid identity](../../active-directory/hybrid/whatis-hybrid-identity.md) with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account and uses it to update the password of the registered account in AD DS. Then it regenerates the target Kerberos key of the storage account and updates the password of the registered account in AD DS.
-To prevent password rotation, during the onboarding of the Azure Storage account in the domain, make sure to place the Azure Storage Account into a separate organizational unit in AD DS. Disable Group Policy inheritance on this organizational unit to prevent default domain policies or specific password policies to be applied.
+To prevent password rotation, during the onboarding of the Azure storage account in the domain, make sure to place the Azure storage account into a separate organizational unit in AD DS. Disable Group Policy inheritance on this organizational unit to prevent default domain policies or specific password policies to be applied.
```PowerShell # Update the password of the AD DS account registered for the storage account
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
description: Troubleshoot problems with SMB Azure file shares in Windows. See co
Previously updated : 09/09/2022 Last updated : 09/28/2022
Windows 8, Windows Server 2012, and later versions of each system negotiate requ
### Solution for cause 1

1. Connect from a client that supports SMB encryption (Windows 8/Windows Server 2012 or later).
-2. Connect from a virtual machine in the same datacenter as the Azure storage account that is used for the Azure file share.
-3. Verify the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is disabled on the storage account if the client does not support SMB encryption.
+2. Connect from a virtual machine (VM) in the same datacenter as the Azure storage account that is used for the Azure file share.
+3. Verify the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is disabled on the storage account if the client doesn't support SMB encryption.
### Cause 2: Virtual network or firewall rules are enabled on the storage account

Network traffic is denied if virtual network (VNET) and firewall rules are configured on the storage account, unless the client IP address or virtual network is allow-listed.
Verify that virtual network and firewall rules are configured properly on the st
### Cause 3: Share-level permissions are incorrect when using identity-based authentication
-If end-users are accessing the Azure file share using Active Directory (AD) or Azure Active Directory Domain Services (Azure AD DS) authentication, access to the file share fails with "Access is denied" error if share-level permissions are incorrect.
+If end users are accessing the Azure file share using Active Directory (AD) or Azure Active Directory Domain Services (Azure AD DS) authentication, access to the file share fails with "Access is denied" error if share-level permissions are incorrect.
### Solution for cause 3
Validate that permissions are configured correctly:
- **Active Directory (AD)** see [Assign share-level permissions to an identity](./storage-files-identity-ad-ds-assign-permissions.md).
- Share-level permission assignments are supported for groups and users that have been synced from the Active Directory (AD) to Azure Active Directory (Azure AD) using Azure AD Connect. Confirm that groups and users being assigned share-level permissions are not unsupported "cloud-only" groups.
+ Share-level permission assignments are supported for groups and users that have been synced from Active Directory Domain Services (AD DS) to Azure Active Directory (Azure AD) using Azure AD Connect. Confirm that groups and users being assigned share-level permissions are not unsupported "cloud-only" groups.
- **Azure Active Directory Domain Services (Azure AD DS)** see [Assign access permissions to an identity](./storage-files-identity-auth-active-directory-domain-service-enable.md?tabs=azure-portal#assign-access-permissions-to-an-identity). <a id="error53-67-87"></a>
TcpTestSucceeded : True
> [!Note]
-> The above command returns the current IP address of the storage account. This IP address is not guaranteed to remain the same, and may change at any time. Do not hardcode this IP address into any scripts, or into a firewall configuration.
+> The above command returns the current IP address of the storage account. This IP address is not guaranteed to remain the same, and may change at any time. Don't hardcode this IP address into any scripts, or into a firewall configuration.
### Solution for cause 1 #### Solution 1: Use Azure File Sync as a QUIC endpoint
-Azure File Sync can be used as a workaround to access Azure Files from clients that have port 445 blocked. Although Azure Files doesn't directly support SMB over QUIC, Windows Server 2022 Azure Edition does support the QUIC protocol. You can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. This uses port 443, which is widely open outbound to support HTTPS, instead of port 445. To learn more about this option, see [SMB over QUIC with Azure File Sync](storage-files-networking-overview.md#smb-over-quic).
+You can use Azure File Sync as a workaround to access Azure Files from clients that have port 445 blocked. Although Azure Files doesn't directly support SMB over QUIC, Windows Server 2022 Azure Edition does support the QUIC protocol. You can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. This uses port 443, which is widely open outbound to support HTTPS, instead of port 445. To learn more about this option, see [SMB over QUIC with Azure File Sync](storage-files-networking-overview.md#smb-over-quic).
#### Solution 2: Use VPN or ExpressRoute By setting up a VPN or ExpressRoute from on-premises to your Azure storage account, with Azure Files exposed on your internal network using private endpoints, the traffic goes through a secure tunnel instead of over the internet. Follow the [instructions to set up a VPN](storage-files-configure-p2s-vpn-windows.md) to access Azure Files from Windows.
Azure Files also supports REST in addition to SMB. REST access works over port 4
System error 53 or system error 87 can occur if NTLMv1 communication is enabled on the client. Azure Files supports only NTLMv2 authentication. Having NTLMv1 enabled creates a less-secure client. Therefore, communication is blocked for Azure Files.
-To determine whether this is the cause of the error, verify that the following registry subkey is not set to a value less than 3:
+To determine whether this is the cause of the error, verify that the following registry subkey isn't set to a value less than 3:
**HKLM\SYSTEM\CurrentControlSet\Control\Lsa > LmCompatibilityLevel**
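A minimal sketch for checking the value and, if necessary, raising it (run from an elevated PowerShell prompt; setting the value to 3 is an assumption that matches the minimum needed for NTLMv2):
```PowerShell
$lsaPath = "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa"

# A value of 3 or higher (or no value at all) means the client sends NTLMv2 responses.
Get-ItemProperty -Path $lsaPath -Name "LmCompatibilityLevel" -ErrorAction SilentlyContinue

# Raise the value if it's currently set below 3.
New-ItemProperty -Path $lsaPath -Name "LmCompatibilityLevel" -Value 3 -PropertyType DWord -Force
```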
When you open a file from a mounted Azure file share over SMB, your application/
- `ReadWrite`: a combination of both the `Read` and `Write` sharing modes. - `Delete`: others may delete the file while you have it open.
-Although as a stateless protocol, the FileREST protocol does not have a concept of file handles, it does provide a similar mechanism to mediate access to files and folders that your script, application, or service may use: file leases. When a file is leased, it is treated as equivalent to a file handle with a file sharing mode of `None`.
+Although as a stateless protocol, the FileREST protocol doesn't have a concept of file handles, it does provide a similar mechanism to mediate access to files and folders that your script, application, or service may use: file leases. When a file is leased, it's treated as equivalent to a file handle with a file sharing mode of `None`.
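To see which handles are currently held on a share, or to clear them (for example, when handles are orphaned as described next), a minimal sketch using the Az.Storage cmdlets (placeholder account, key, and share names):
```PowerShell
# Placeholder values; requires the Az.Storage module.
$context = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<storage-account-key>"
$shareName = "<share-name>"

# List all open handles on the share, including subdirectories.
Get-AzStorageFileHandle -ShareName $shareName -Recursive -Context $context

# Force-close every handle on the share, for example to clear orphaned handles.
Close-AzStorageFileHandle -ShareName $shareName -CloseAll -Recursive -Context $context
```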
Although file handles and leases serve an important purpose, sometimes file handles and leases might be orphaned. When this happens, this can cause problems modifying or deleting files. You may see error messages like:
If you map an Azure file share as an administrator by using net use, the share a
### Cause
-By default, Windows File Explorer does not run as an administrator. If you run net use from an administrative command prompt, you map the network drive as an administrator. Because mapped drives are user-centric, the user account that is logged in does not display the drives if they are mounted under a different user account.
+By default, Windows File Explorer doesn't run as an administrator. If you run net use from an administrative command prompt, you map the network drive as an administrator. Because mapped drives are user-centric, the user account that is logged in doesn't display the drives if they're mounted under a different user account.
### Solution Mount the share from a non-administrator command line. Alternatively, you can follow [this TechNet topic](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee844140(v=ws.10)) to configure the **EnableLinkedConnections** registry value.
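If you choose the registry approach, a minimal sketch is shown below (run from an elevated prompt; the path and value follow the linked TechNet guidance, so verify them before applying, and restart the machine afterwards):
```PowerShell
$policyPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

# Create or update the EnableLinkedConnections value so drives mapped in one context are visible in the other.
New-ItemProperty -Path $policyPath -Name "EnableLinkedConnections" -Value 1 -PropertyType DWord -Force
```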
Be aware that setting the registry key affects all copy operations that are made
### Cause
-This problem can occur if there is no enough cache on client machine for large directories.
+This problem can occur if there isn't enough cache on the client machine for large directories.
### Solution
-To resolve this problem, adjusting the **DirectoryCacheEntrySizeMax** registry value to allow caching of larger directory listings in the client machine:
+To resolve this problem, adjust the **DirectoryCacheEntrySizeMax** registry value to allow caching of larger directory listings in the client machine:
- Location: `HKLM\System\CCS\Services\Lanmanworkstation\Parameters` - Value name: `DirectoryCacheEntrySizeMax`
For example, you can set it to `0x100000` and see if the performance improves.
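A minimal sketch for applying that value (run from an elevated prompt; CCS in the documented path abbreviates CurrentControlSet):
```PowerShell
$lanmanPath = "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters"

# Allow directory listings of up to 1 MiB (0x100000 bytes) to be cached on the client.
New-ItemProperty -Path $lanmanPath -Name "DirectoryCacheEntrySizeMax" -Value 0x100000 -PropertyType DWord -Force
```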
### Cause
-Error AadDsTenantNotFound happens when you try to [enable Azure Active Directory Domain Services (Azure AD DS) authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md) on a storage account where [Azure AD Domain Service(Azure AD DS)](../../active-directory-domain-services/overview.md) is not created on the Azure AD tenant of the associated subscription.
+Error AadDsTenantNotFound happens when you try to [enable Azure Active Directory Domain Services (Azure AD DS) authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md) on a storage account where [Azure AD Domain Services (Azure AD DS)](../../active-directory-domain-services/overview.md) isn't created on the Azure AD tenant of the associated subscription.
### Solution
Enable Azure AD DS on the Azure AD tenant of the subscription that your storage
## Unable to mount Azure Files with AD credentials ### Self diagnostics steps
-First, make sure that you have followed through all four steps to [enable Azure Files AD Authentication](./storage-files-identity-auth-active-directory-enable.md).
+First, make sure that you've followed through all four steps to [enable Azure Files AD Authentication](./storage-files-identity-auth-active-directory-enable.md).
-Second, try [mounting Azure file share with storage account key](./storage-how-to-use-files-windows.md). If you failed to mount, download [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) to help you validate the client running environment, detect the incompatible client configuration which would cause access failure for Azure Files, give prescriptive guidance on self-fix and collect the diagnostics traces.
+Second, try [mounting Azure file share with storage account key](./storage-how-to-use-files-windows.md). If the share fails to mount, download [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) to help you validate the client running environment, detect the incompatible client configuration which would cause access failure for Azure Files, give prescriptive guidance on self-fix, and collect the diagnostics traces.
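For that second step, a minimal sketch of mounting the share with the storage account key (placeholder names; the drive letter Z is arbitrary):
```PowerShell
$storageAccountName = "<storage-account>"
$shareName = "<share-name>"
$storageAccountKey = "<storage-account-key>"

# Persist the storage account key as a Windows credential for the file endpoint.
cmd.exe /C "cmdkey /add:$storageAccountName.file.core.windows.net /user:localhost\$storageAccountName /pass:$storageAccountKey"

# Map the share to drive Z over SMB (port 445).
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$storageAccountName.file.core.windows.net\$shareName" -Persist
```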
-Third, you can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration with the logged on AD user. This cmdlet is supported on [AzFilesHybrid v0.1.2+ version](https://github.com/Azure-Samples/azure-files-samples/releases). You need to run this cmdlet with an AD user that has owner permission on the target storage account.
+Third, you can run the `Debug-AzStorageAccountAuth` cmdlet to conduct a set of basic checks on your AD configuration with the logged on AD user. This cmdlet is supported on [AzFilesHybrid v0.1.2+ version](https://github.com/Azure-Samples/azure-files-samples/releases). You need to run this cmdlet with an AD user that has owner permission on the target storage account.
```PowerShell $ResourceGroupName = "<resource-group-name-here>" $StorageAccountName = "<storage-account-name-here>"
The cmdlet performs these checks below in sequence and provides guidance for fai
### Symptom You may experience either of the symptoms described below when trying to configure Windows ACLs with File Explorer on a mounted file share:-- After you click on Edit permission under the Security tab, the Permission wizard does not load. -- When you try to select a new user or group, the domain location does not display the right AD DS domain.
+- After you click on Edit permission under the Security tab, the Permission wizard doesn't load.
+- When you try to select a new user or group, the domain location doesn't display the right AD DS domain.
### Solution
This error may occur if a domain controller that holds the RID Master FSMO role
### Error: "Cannot bind positional parameters because no names were given"
-This error is most likely triggered by a syntax error in the Join-AzStorageAccountforAuth command. Check the command for misspellings or syntax errors and verify that the latest version of the AzFilesHybrid module (https://github.com/Azure-Samples/azure-files-samples/releases) is installed.
+This error is most likely triggered by a syntax error in the `Join-AzStorageAccountforAuth` command. Check the command for misspellings or syntax errors and verify that the latest version of the AzFilesHybrid module (https://github.com/Azure-Samples/azure-files-samples/releases) is installed.
## Azure Files on-premises AD DS Authentication support for AES-256 Kerberos encryption
-Azure Files supports AES-256 Kerberos encryption for AD DS authentication with the [AzFilesHybrid module v0.2.2](https://github.com/Azure-Samples/azure-files-samples/releases). AES-256 is the recommended authentication method. If you have enabled AD DS authentication with a module version lower than v0.2.2, you will need to download the latest AzFilesHybrid module (v0.2.2+) and run the PowerShell below. If you have not enabled AD DS authentication on your storage account yet, you can follow this [guidance](./storage-files-identity-ad-ds-enable.md#option-one-recommended-use-azfileshybrid-powershell-module) for enablement.
+Azure Files supports AES-256 Kerberos encryption for AD DS authentication beginning with the AzFilesHybrid module v0.2.2. AES-256 is the recommended authentication method. If you've enabled AD DS authentication with a module version lower than v0.2.2, you'll need to [download the latest AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases) and run the PowerShell below. If you haven't enabled AD DS authentication on your storage account yet, follow this [guidance](./storage-files-identity-ad-ds-enable.md#option-one-recommended-use-azfileshybrid-powershell-module) for enablement.
```PowerShell $ResourceGroupName = "<resource-group-name-here>"
Navigate to the desired storage account in the Azure portal. In the table of con
![A screenshot of the access key pane](./media/storage-troubleshoot-windows-file-connection-problems/access-keys-1.png) # [PowerShell](#tab/azure-powershell)
-The following script will rotate both keys for the storage account. If you desire to swap out keys during rotation, you will need to provide additional logic in your script to handle this scenario. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment.
+The following script will rotate both keys for the storage account. If you desire to swap out keys during rotation, you'll need to provide additional logic in your script to handle this scenario. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment.
```PowerShell $resourceGroupName = "<resource-group>"
New-AzStorageAccountKey `
``` # [Azure CLI](#tab/azure-cli)
-The following script will rotate both keys for the storage account. If you desire to swap out keys during rotation, you will need to provide additional logic in your script to handle this scenario. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment.
+The following script will rotate both keys for the storage account. If you desire to swap out keys during rotation, you'll need to provide additional logic in your script to handle this scenario. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment.
```bash resourceGroupName="<resource-group>"
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
Here's an example of what a parameter template definition looks like:
} }, "Microsoft.Synapse/workspaces/datasets": {
- "properties": {
- "typeProperties": {
- "*": "="
+ "*": {
+ "properties": {
+ "typeProperties": {
+ "folderPath": "=",
+ "fileName": "="
+ }
} } },
If you're using Git integration with your Azure Synapse workspace and you have a
## Troubleshoot artifacts deployment
-### Use the Synapse workspace deployment task
+### Publish failed: workspace ARM file is larger than 20 MB
-In Azure Synapse, unlike in Data Factory, some artifacts aren't Resource Manager resources. You can't use the ARM template deployment task to deploy Azure Synapse artifacts. Instead, use the Synapse workspace deployment task.
+Git providers limit file size; in Azure DevOps, for example, the maximum file size is 20 MB. This error occurs when you publish changes in Synapse Studio and the generated workspace template file that's synced to Git exceeds 20 MB. To solve the issue, use the Synapse workspace deployment task with the **validate** or **validate and deploy** operation to save the workspace template file directly on the pipeline agent, without manually publishing in Synapse Studio.
+
+### Use the Synapse workspace deployment task to deploy Synapse artifacts
+
+In Azure Synapse, unlike in Data Factory, artifacts aren't Resource Manager resources. You can't use the ARM template deployment task to deploy Azure Synapse artifacts. Instead, use the Synapse workspace deployment task to deploy the artifacts, and use the ARM template deployment task for ARM resources (pools and the workspace). This extension supports only Synapse templates in which resources have the type Microsoft.Synapse.
++ ### Unexpected token error in release If your parameter file has parameter values that aren't escaped, the release pipeline fails to parse the file and generates an `unexpected token` error. We suggest that you override parameters or use Key Vault to retrieve parameter values. You can also use double escape characters to resolve the issue.+
+### Integration runtime deployment failed
+
+This error happens if the workspace template was generated from a workspace with managed virtual network enabled and you try to deploy it to a regular workspace, or vice versa.
+### Unexpected character encountered while parsing value
+
+The template file can't be parsed. Try escaping the backslashes, for example, `\\\\Test01\\Test`.
+
+### Failed to fetch workspace info, Not found.
+
+The target workspace information isn't configured correctly. Make sure the service connection you created is scoped to the resource group that contains the workspace.
+
+### Artifact deletion failed.
+
+The extension compares the artifacts in the publish branch with the template and deletes artifacts based on the difference. Make sure you aren't trying to delete an artifact that's present in the publish branch and that another artifact references or depends on.
+
+### Deployment failed with error: json position 0
+
+This error happens if you tried to update the template manually. Make sure that you haven't manually edited the template.
+
+### The document creation or update failed because of invalid reference.
+
+A Synapse artifact can be referenced by another artifact. If you've parameterized an attribute that's referenced in an artifact, make sure to provide a correct, non-null value for it.
+
+### Failed to fetch the deployment status in notebook deployment
+
+The notebook you're trying to deploy is attached to a Spark pool in the workspace template file, but the pool doesn't exist in the target workspace. If you don't parameterize the pool name, make sure the pools have the same name across environments.
synapse-analytics Design Guidance For Replicated Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md
Previously updated : 11/02/2021 Last updated : 09/27/2022
Replicated tables may not yield the best query performance when:
- The SQL pool is scaled frequently. Scaling a SQL pool changes the number of Compute nodes, which incurs rebuilding the replicated table. - The table has a large number of columns, but data operations typically access only a small number of columns. In this scenario, instead of replicating the entire table, it might be more effective to distribute the table, and then create an index on the frequently accessed columns. When a query requires data movement, SQL pool only moves data for the requested columns.
+> [!TIP]
+> For more guidance on indexing and replicated tables, see the [Cheat sheet for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics](cheat-sheet.md#index-your-table).
+ ## Use replicated tables with simple query predicates Before you choose to distribute or replicate a table, think about the types of queries you plan to run against the table. Whenever possible,
synapse-analytics Performance Tuning Result Set Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-result-set-caching.md
Once result set caching is turned ON for a database, results are cached for all
- Queries with built-in functions or runtime expressions that are non-deterministic even when there's no change in base tables' data or query. For example, DateTime.Now(), GetDate(). - Queries using user defined functions-- Queries using tables with row level security or column level security enabled
+- Queries using tables with row level security
- Queries returning data with row size larger than 64KB - Queries returning large data in size (>10GB) >[!NOTE]
synapse-analytics Sql Data Warehouse Tables Distribute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md
Previously updated : 08/09/2022 Last updated : 09/27/2022
WITH
> [!NOTE] > Multi-column distribution is currently in preview for Azure Synapse Analytics. For more information on joining the preview, see multi-column distribution with [CREATE MATERIALIZED VIEW](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql), [CREATE TABLE](/sql/t-sql/statements/create-table-azure-sql-data-warehouse), or [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse).
-<!-- Data stored in the distribution column(s) can be updated. Updates to data in distribution column(s) could result in data shuffle operation.-->
+Data stored in the distribution column(s) can be updated. Updates to data in the distribution column(s) could result in a data shuffle operation.
Choosing distribution column(s) is an important design decision since the values in the hash column(s) determine how the rows are distributed. The best choice depends on several factors, and usually involves tradeoffs. Once a distribution column or column set is chosen, you cannot change it. If you didn't choose the best column(s) the first time, you can use [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to re-create the table with the desired distribution hash key.
RENAME OBJECT [dbo].[FactInternetSales_CustomerKey] TO [FactInternetSales];
To create a distributed table, use one of these statements: - [CREATE TABLE (dedicated SQL pool)](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)-- [CREATE TABLE AS SELECT (dedicated SQL pool)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
+- [CREATE TABLE AS SELECT (dedicated SQL pool)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
description: Learn about the new features and documentation improvements for Azu
Previously updated : 09/14/2022 Last updated : 09/27/2022
# What's new in Azure Synapse Analytics?
-See below for a recent review of what's new in [Azure Synapse Analytics](overview-what-is.md), and also what features are currently in preview. To follow the latest in Azure Synapse news and features, see the [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) and [companion videos on YouTube](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g).
+This page is continuously updated with a recent review of what's new in [Azure Synapse Analytics](overview-what-is.md), and also what features are currently in preview. To follow the latest in Azure Synapse news and features, see the [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) and [companion videos on YouTube](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g).
For older updates, review past [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) posts or [previous monthly updates in Azure Synapse Analytics](whats-new-archive.md).
The following table lists the features of Azure Synapse Analytics that are curre
| **Data flow improvements to Data Preview** | To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). | | **Distribution Advisor**| The Distribution Advisor is a new preview feature in Azure Synapse dedicated SQL pools Gen2 that analyzes queries and recommends the best distribution strategies for tables to improve query performance. For more information, see [Distribution Advisor in Azure Synapse SQL](sql/distribution-advisor.md).| | **Distributed Deep Neural Network Training** | Learn more about new distributed training libraries like Horovod, Petastorm, TensorFlow, and PyTorch in [Deep learning tutorials](./machine-learning/concept-deep-learning.md). |
+| **Embed ADX dashboards** | Azure Data Explorer dashboards can be [embedded in an iFrame and hosted in third party apps](/azure/data-explorer/kusto/api/monaco/host-web-ux-in-iframe). |
| **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | You can now use a Stream Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster using the Azure portal or an ARM template. For more information on this preview feature, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector). | | **Multi-column distribution in dedicated SQL pools** | You can now Hash Distribute tables on multiple columns for a more even distribution of the base table, reducing data skew over time and improving query performance. For more information on opting in to the preview, see [CREATE TABLE distribution options](/sql/t-sql/statements/create-table-azure-sql-data-warehouse#TableDistributionOptions) or [CREATE TABLE AS SELECT distribution options](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse#table-distribution-options).| | **SAP CDC connector preview** | A new data connector for SAP Change Data Capture (CDC) is now available in preview. For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-data/ba-p/3420904) and [SAP CDC solution in Azure Data Factory](/azure/data-factory/sap-change-data-capture-introduction-architecture).| | **Spark Delta Lake tables in serverless SQL pools** | The ability for serverless SQL pools to access Delta Lake tables created in Spark databases is in preview. For more information, see [Azure Synapse Analytics shared metadata tables](metadat).|
-| **Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker nodes temporary storage and attach additional disks if needed. No action is required, and you should see fewer job failures as a result. For more information, see [Blog: Azure Synapse Analytics Spark elastic pool storage is available for public preview](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_8).|
+| **Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker node temporary storage and attach more disks if needed. No action is required, and you should see fewer job failures as a result. For more information, see [Blog: Azure Synapse Analytics Spark elastic pool storage is available for public preview](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_8).|
| **Spark Optimized Write** | Optimize Write is a Delta Lake on Synapse feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase individual file size of the written data. To learn more about the usage scenarios and how to enable this preview feature, read [The need for optimize write on Apache Spark](spark/optimize-write-for-apache-spark.md).| | **Time-To-Live in managed virtual network (VNet)** | Reserve compute for the time-to-live (TTL) in managed virtual network TTL period, saving time and improving efficiency. For more information on this preview, see [Announcing public preview of Time-To-Live (TTL) in managed virtual network](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879).| | **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows.To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).|
The following table lists the features of Azure Synapse Analytics that have tran
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
-| July 2022 | **Apache Spark&trade; 3.2 on Synapse Analytics** | Apache Spark&trade; 3.2 on Synapse Analytics is now generally available. Review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). Highlights of what got better in Spark 3.2 in the [Azure Synapse Analytics July Update 2022](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_1).|
+| September 2022 | **MERGE T-SQL syntax** | [MERGE T-SQL syntax](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true) has been a highly requested addition to the Synapse T-SQL library. As in SQL Server, the MERGE syntax encapsulates INSERTs/UPDATEs/DELETEs into a single high-performance statement. Available in dedicated SQL pools in version 10.0.17829 and above. For more, see the [MERGE T-SQL announcement blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/merge-t-sql-for-dedicated-sql-pools-is-now-ga/ba-p/3634331).|
+| July 2022 | **Apache Spark&trade; 3.2 for Synapse Analytics** | Apache Spark&trade; 3.2 for Synapse Analytics is now generally available. Review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). Highlights of what got better in Spark 3.2 in the [Azure Synapse Analytics July Update 2022](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_1).|
| July 2022 | **Apache Spark in Azure Synapse Intelligent Cache feature** | Intelligent Cache for Spark automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md).| | June 2022 | **Map Data tool** | The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about the Map Data tool, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md).| | June 2022 | **User Defined Functions** | User defined functions (UDFs) are now generally available. To learn more, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628). |
The following table lists the features of Azure Synapse Analytics that have tran
| November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).| | October 2021 | **Synapse RBAC Roles** | [Synapse role-based access control (RBAC) roles are now generally available](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-rbac). Learn more about [Synapse RBAC roles](./security/synapse-workspace-synapse-rbac-roles.md) and [Azure Synapse role-based access control (RBAC) using PowerShell](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/retrieve-azure-synapse-role-based-access-control-rbac/ba-p/3466419#:~:text=Synapse%20RBAC%20is%20used%20to%20manage%20who%20can%3A,job%20execution%2C%20review%20job%20output%2C%20and%20execution%20logs.).|
-## Community
+## Community
This section summarizes new Azure Synapse Analytics community opportunities and the [Azure Synapse Influencers program](https://aka.ms/synapseinfluencers) from Microsoft. Follow us on our [@Azure_Synapse](https://twitter.com/Azure_Synapse/) Twitter account or [#AzureSynapse](https://twitter.com/hashtag/AzureSynapse?src=hashtag_click) for announcements on upcoming Azure Synapse Influencer events in the coming weeks.
This section summarizes new Azure Synapse Analytics community opportunities and
| May 2022 | **Azure Synapse influencer program** | Sign up for our free [Azure Synapse Influencers program](https://aka.ms/synapseinfluencers) and get connected with a community of Synapse-users who are dedicated to helping others achieve more with cloud analytics. Register now for our next [Synapse Influencers Ask the Experts session](https://aka.ms/synapseinfluencers/#events). It's free to attend and everyone is welcome to participate and join the discussion on Synapse-related topics. You can [watch past recorded Ask the Experts events](https://aka.ms/ATE-RecordedSessions) on the [Azure Synapse YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g). | | March 2022 | **Azure Synapse Analytics and Microsoft MVP YouTube video series** | A joint activity with the Azure Synapse product team and the Microsoft MVP community, a new [YouTube MVP Video Series about the Azure Synapse features](https://www.youtube.com/playlist?list=PLzUAjXZBFU9MEK2trKw_PGk4o4XrOzw4H) has launched. See more at the [Azure Synapse Analytics YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g).|
-## Apache Spark for Azure Synapse Analytics
+## Apache Spark for Azure Synapse Analytics
This section summarizes recent new features and capabilities of [Apache Spark for Azure Synapse Analytics](spark/apache-spark-overview.md). |**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| September 2022 | **New informative Livy error codes** | [More precise error codes](spark/apache-spark-handle-livy-error.md) describe the cause of failure and replace the previous generic error codes. Previously, all errors in failing Spark jobs surfaced with a generic error code displaying LIVY_JOB_STATE_DEAD. |
| September 2022 | **New query optimization techniques in Apache Spark for Azure Synapse Analytics** | Read the [findings from Microsoft's work](https://vldb.org/pvldb/vol15/p936-rajan.pdf) to gain considerable performance benefits across the board on the reference TPC-DS workload as well as a significant reduction in query plan generation time. | | August 2022 | **Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker nodes temporary storage and attach additional disks if needed. No action is required, and you should see fewer job failures as a result. For more information, see [Blog: Azure Synapse Analytics Spark elastic pool storage is available for public preview](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_8).| | August 2022 | **Spark Optimized Write** | Optimize Write is a Delta Lake on Synapse preview feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase individual file size of the written data. To learn more, see [The need for optimize write on Apache Spark](spark/optimize-write-for-apache-spark.md).| | July 2022 | **Apache Spark 2.4 enters retirement lifecycle** | With the general availability of the Apache Spark 3.2 runtime, the Azure Synapse runtime for Apache Spark 2.4 enters a 12-month retirement cycle. You should relocate your workloads to the newer Apache Spark 3.2 runtime within this period. Read more at [Apache Spark runtimes in Azure Synapse](spark/apache-spark-version-support.md).| | May 2022 | **Azure Synapse dedicated SQL pool connector for Apache Spark now available in Python** | Previously, the [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md) was only available using Scala. Now, [the dedicated SQL pool connector for Apache Spark can be used with Python on Spark 3](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_6). | | May 2022 | **Manage Azure Synapse Apache Spark configuration** | With the new [Apache Spark configurations](./spark/apache-spark-azure-create-spark-configuration.md) feature, you can create a standalone Spark configuration artifact with auto-suggestions and built-in validation rules. The Spark configuration artifact allows you to share your Spark configuration within and across Azure Synapse workspaces. You can also easily associate your Spark configuration with a Spark pool, a Notebook, and a Spark job definition for reuse and minimize the need to copy the Spark configuration in multiple places. |
-| April 2022 | **Apache Spark 3.2 on Synapse Analytics** | Apache Spark 3.2 on Synapse Analytics with preview availability. Review the [official Spark 3.2 release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). |
+| April 2022 | **Apache Spark 3.2 for Synapse Analytics** | Apache Spark 3.2 for Synapse Analytics with preview availability. Review the [official Spark 3.2 release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). |
| April 2022 | **Parameterization for Spark job definition** | You can now assign parameters dynamically based on variables, metadata, or specifying Pipeline specific parameters for the Spark job definition activity. For more details, read [Transform data using Apache Spark job definition](quickstart-transform-data-using-spark-job-definition.md#settings-tab). |
-| April 2022 | **Spark notebook snapshot** | You can access a snapshot of the Notebook when there is a Pipeline Notebook run failure or when there is a long-running Notebook job. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1). |
+| April 2022 | **Spark notebook snapshot** | You can access a snapshot of the Notebook when there's a Pipeline Notebook run failure or when there's a long-running Notebook job. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1). |
| March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). | | March 2022 | **Performance optimization for Synapse Spark dedicated SQL pool connector** | New improvements to the [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) reduce data movement and leverage `COPY INTO`. Performance tests indicated at least ~5x improvement over the previous version. No action is required from the user to leverage these enhancements. For more information, see [Blog: Synapse Spark Dedicated SQL Pool (DW) Connector: Performance Improvements](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_10).| | March 2022 | **Support for all Spark Dataframe SaveMode choices** | The [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) now supports all four Spark Dataframe SaveMode choices: Append, Overwrite, ErrorIfExists, Ignore. For more information on Spark SaveMode, read the [official Apache Spark documentation](https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/SaveMode.html?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
This section summarizes recent new features and capabilities of Azure Synapse An
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| September 2022 | **Gantt chart view** | You can now view your activity runs with a Gantt chart in [Azure Data Factory Integration Runtime monitoring](/azure/data-factory/monitor-integration-runtime). |
+| September 2022 | **Monitoring improvements** | We've released [a new bundle of improvements to the monitoring experience](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/further-adf-monitoring-improvements/ba-p/3607669) based on community feedback. |
+| September 2022 | **Maximum column optimization in mapping dataflow** | For delimited text data sources such as CSVs, a new **maximum columns** setting allows you to [set the maximum number of columns](/azure/data-factory/format-delimited-text#mapping-data-flow-properties). |
+| September 2022 | **NUMBER to integer conversion in Oracle data source connector** | The new **convertDecimalToInteger** property converts the Oracle NUMBER type to a corresponding integer type in the source. For more information, see the [Oracle source connector](/azure/data-factory/connector-oracle?tabs=data-factory#oracle-as-source).|
+| September 2022 | **Support for sending a body with HTTP request DELETE method in Web activity** | New support for sending a body (optional) when using the DELETE method in Web activity. For more information, see the available [Type properties for the Web activity](/azure/data-factory/control-flow-web-activity#type-properties). |
| August 2022 | **Mapping data flows now support visual Cast transformation** | You can [use the cast transformation](/azure/data-factory/data-flow-cast) to easily modify the data types of individual columns in a data flow. | | August 2022 | **Default activity timeout changed to 12 hours** | The [default activity timeout is now 12 hours](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-changing-default-pipeline-activity-timeout/ba-p/3598729). | | August 2022 | **Pipeline expression builder ease-of-use enhancements** | We've [updated our expression builder UI to make pipeline designing easier](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/coming-soon-to-adf-more-pipeline-expression-builder-ease-of-use/ba-p/3567196). |
This section summarizes recent new features and capabilities of Azure Synapse An
| March 2022 | **Pipeline script activity** | You can now [Transform data by using the Script activity](/azure/data-factory/transform-data-using-script) to invoke SQL commands to perform both DDL and DML. | | December 2021 | **Custom partitions for Synapse link for Azure Cosmos DB** | Improve query execution times for your Spark queries, by creating custom partitions based on fields frequently used in your queries. To learn more, see [Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)](../cosmos-db/custom-partitioning-analytical-store.md). | - ## Database Templates & Database Designer This section summarizes recent new features and capabilities of [database templates](./database-designer/overview-database-templates.md) and [the database designer](database-designer/quick-start-create-lake-database.md).
This section summarizes recent new quality of life and feature improvements for
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
-| July 2022 | **Synapse Notebooks compatibility with IPython** | The official kernel for Jupyter notebooks is IPython and it is now supported in Synapse Notebooks. For more information, see [Synapse Notebooks is now fully compatible with IPython](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_14).|
+| July 2022 | **Synapse Notebooks compatibility with IPython** | The official kernel for Jupyter notebooks is IPython and it's now supported in Synapse Notebooks. For more information, see [Synapse Notebooks is now fully compatible with IPython](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_14).|
| July 2022 | **Mssparkutils now has spark.stop() method** | A new API `mssparkutils.session.stop()` has been added to the mssparkutils package. This feature becomes handy when there are multiple sessions running against the same Spark pool. The new API is available for Scala and Python. To learn more, see [Stop an interactive session](spark/microsoft-spark-utilities.md#stop-an-interactive-session).| | May 2022 | **Updated Azure Synapse Analyzer Report** | Learn about the new features in [version 2.0 of the Synapse Analyzer report](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/updated-synapse-analyzer-report-workload-management-and-ability/ba-p/3580269).| | April 2022 | **Azure Synapse Analyzer Report** | The [Azure Synapse Analyzer Report](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analyzer-report-to-monitor-and-improve-azure/ba-p/3276960) helps you identify common issues that may be present in your database that can lead to performance issues.|
This section summarizes recent new quality of life and feature improvements for
| March 2022 | **Code cells with exception to show standard output**| Now in Synapse notebooks, both standard output and exception messages are shown when a code statement fails for Python and Scala languages. For examples, see [Synapse notebooks: Code cells with exception to show standard output](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).| | March 2022 | **Partial output is available for running notebook code cells** | Now in Synapse notebooks, you can see anything you write (with `println` commands, for example) as the cell executes, instead of waiting until it ends. For examples, see [Synapse notebooks: Partial output is available for running notebook code cells ](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).| | March 2022 | **Dynamically control your Spark session configuration with pipeline parameters** | Now in Synapse notebooks, you can use pipeline parameters to configure the session with the notebook %%configure magic. For examples, see [Synapse notebooks: Dynamically control your Spark session configuration with pipeline parameters](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_2).|
-| March 2022 | **Reuse and manage notebook sessions** | Now in Synapse notebooks, it is easy to reuse an active session conveniently without having to start a new one and to see and manage your active sessions in the **Active sessions** list. To view your sessions, select the 3 dots in the notebook and select **Manage sessions.** For examples, see [Synapse notebooks: Reuse and manage notebook sessions](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_3).|
+| March 2022 | **Reuse and manage notebook sessions** | Now in Synapse notebooks, it's easy to reuse an active session conveniently without having to start a new one and to see and manage your active sessions in the **Active sessions** list. To view your sessions, select the 3 dots in the notebook and select **Manage sessions.** For examples, see [Synapse notebooks: Reuse and manage notebook sessions](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_3).|
| March 2022 | **Support for Python logging** | Now in Synapse notebooks, anything written through the Python logging module is captured, in addition to the driver logs. For examples, see [Synapse notebooks: Support for Python logging](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_4).| ## Machine Learning
This section summarizes new guidance and sample project resources for Azure Syna
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| September 2022 | **What is the difference between Synapse dedicated SQL pool (formerly SQL DW) and Serverless SQL pool?** | Understand dedicated vs serverless pools and their concurrency. Read more at [basic concepts of dedicated SQL pools and serverless SQL pools](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/understand-synapse-dedicated-sql-pool-formerly-sql-dw-and/ba-p/3594628).|
+| September 2022 | **Reading Delta Lake in dedicated SQL Pool** | [Sample script](https://github.com/microsoft/Azure_Synapse_Toolbox/tree/master/TSQL_Queries/Delta%20Lake) to import Delta Lake files directly into the dedicated SQL Pool and support features like time-travel. For an explanation, see [Reading Delta Lake in dedicated SQL Pool](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/reading-delta-lake-in-dedicated-sql-pool/ba-p/3571053).|
| September 2022 | **Azure Synapse Customer Success Engineering blog series** | The new [Azure Synapse Customer Success Engineering blog series](https://aka.ms/synapsecseblog) launches with a detailed introduction to [Building the Lakehouse - Implementing a Data Lake Strategy with Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/building-the-lakehouse-implementing-a-data-lake-strategy-with/ba-p/3612291).| | June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models. | | June 2022 | **Migration guides for Oracle** | A new Microsoft-authored migration guide for Oracle to Azure Synapse Analytics is now available. [Design and performance for Oracle migrations](migration-guides/oracle/1-design-performance-migration.md). |
This section summarizes recent new security features and settings in Azure Synap
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
-| August 2022 | **Execute Azure Synapse Spark Notebooks with system-assigned managed identity** | You can [now execute Spark Notebooks with the system-assigned managed identity (or workspace managed identity)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_30) by enabling *Run as managed identity* from the **Configure** session menu. With this feature, you will be able to validate that your notebook works as expected when using the system-assigned managed identity, before using the notebook in a pipeline. For more information, see [Managed identity for Azure Synapse](synapse-service-identity.md).|
+| August 2022 | **Execute Azure Synapse Spark Notebooks with system-assigned managed identity** | You can [now execute Spark Notebooks with the system-assigned managed identity (or workspace managed identity)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_30) by enabling *Run as managed identity* from the **Configure** session menu. With this feature, you'll be able to validate that your notebook works as expected when using the system-assigned managed identity, before using the notebook in a pipeline. For more information, see [Managed identity for Azure Synapse](synapse-service-identity.md).|
| July 2022 | **Changes to permissions needed for publishing to Git** | Now, only Git permissions and the Synapse Artifact Publisher (Synapse RBAC) role are needed to commit changes in Git-mode. For more information, see [Access control enforcement in Synapse Studio](security/synapse-workspace-access-control-overview.md#access-control-enforcement-in-synapse-studio).| | April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator role-based access control (RBAC) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).| | March 2022 | **Enforce minimal TLS version** | You can now raise or lower the minimum TLS version for dedicated SQL pools in Synapse workspaces. To learn more, see [Azure SQL connectivity settings](/azure/azure-sql/database/connectivity-settings#minimal-tls-version). The [workspace managed SQL API](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) can be used to modify the minimum TLS settings.|
Azure Data Explorer (ADX) is a fast and highly scalable data exploration service
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| September 2022 | **Logstash connector proxy configuration** | The Azure Data Explorer (ADX) Logstash plugin enables you to process events from [Logstash](https://github.com/Azure/logstash-output-kusto) into an ADX database for analysis. Version 1.0.5 now supports HTTP/HTTPS proxies.|
+| September 2022 | **Kafka support for Protobuf format** | The [ADX Kafka sink connector](https://www.confluent.io/hub/microsoftcorporation/kafka-sink-azure-kusto) leverages the Kafka Connect framework and provides an adapter to ingest data from Kafka in JSON, Avro, String, and now the [Protobuf format](https://developers.google.com/protocol-buffers) in the latest update. Read more about [Ingesting Protobuf data from Kafka to Azure Data Explorer](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/ingesting-protobuf-data-from-kafka-to-azure-data-explorer/ba-p/3595793). |
+| September 2022 | **Funnel visuals** | [Funnel is the latest visual we added to Azure Data Explorer dashboards](/azure/data-explorer/dashboard-customize-visuals#funnel) following the feedback we received from customers. |
+| September 2022 | **.NET and Node.js support in Sample App Generator** | The [Azure Data Explorer (ADX) sample app generator wizard](https://dataexplorer.azure.com/oneclick/generatecode?sourceType=file&programingLang=C) is a tool that allows you to [create a working app to ingest and query your data](/azure/data-explorer/sample-app-generator-wizard) in your preferred programming language. Now, generating sample apps in .NET and Node.js is supported along with the previously available options Java and Python. |
+| August 2022 | **Embed ADX dashboards** | The ADX web UI and dashboards can be [embedded in an iFrame and hosted in third party apps](/azure/data-explorer/kusto/api/monaco/host-web-ux-in-iframe). |
| August 2022 | **Free cluster upgrade option** | You can now [upgrade your Azure Data Explorer free cluster to a full cluster](/azure/data-explorer/start-for-free-upgrade) that removes the storage limitation allowing you more capacity to grow your data. | | August 2022 | **Analyze fresh ADX data from Excel pivot table** | Now you can [Use fresh and unlimited volume of ADX data (Kusto) from your favorite analytic tool, Excel pivot tables](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/use-fresh-and-unlimited-volume-of-adx-data-kusto-from-your/ba-p/3588894). MDX queries generated by the Pivot code, will find their way to the Kusto backend as KQL statements that will aggregate the data as needed by the pivot and back to Excel.| | August 2022 | **Query results - color by value** | Highlight unique data at-a-glance in query results to visually group rows that share identical values for a specific column. Use **Explore results** and **Color by value** to [apply color to rows based on the selected column](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_14).|
This section summarizes recent improvements and features in SQL pools in Azure S
|**Month** | **Feature** | **Learn more**|
|:-- |:-- | :-- |
+| September 2022 | **Auto-statistics for OPENROWSET in CSV datasets** | Serverless SQL pool will [automatically create statistics](sql/develop-tables-statistics.md#statistics-in-serverless-sql-pool) for CSV datasets when needed to ensure an optimal query execution plan for OPENROWSET queries. |
+| September 2022 | **MERGE T-SQL syntax** | [T-SQL MERGE syntax](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true) has been a highly requested addition to the Synapse T-SQL library. MERGE encapsulates INSERTs/UPDATEs/DELETEs into a single statement. Available in dedicated SQL pools in version 10.0.17829 and above. For more, see the [MERGE T-SQL announcement blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/merge-t-sql-for-dedicated-sql-pools-is-now-ga/ba-p/3634331).|
| August 2022| **Spark Delta Lake tables in serverless SQL pools** | The ability for serverless SQL pools to access Delta Lake tables created in Spark databases is in preview. For more information, see [Azure Synapse Analytics shared metadata tables](metadat).|
| August 2022| **Multi-column distribution in dedicated SQL pools** | You can now hash distribute tables on multiple columns for a more even distribution of the base table, reducing data skew over time and improving query performance. For more information on opting in to the preview, see [CREATE TABLE distribution options](/sql/t-sql/statements/create-table-azure-sql-data-warehouse#TableDistributionOptions) or [CREATE TABLE AS SELECT distribution options](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse#table-distribution-options).|
| August 2022| **Distribution Advisor**| The Distribution Advisor is a new preview feature in Azure Synapse dedicated SQL pools Gen2 that analyzes queries and recommends the best distribution strategies for tables to improve query performance. For more information, see [Distribution Advisor in Azure Synapse SQL](sql/distribution-advisor.md).|
| August 2022 | **Add SQL objects and users in Lake databases** | New capabilities announced for lake databases in serverless SQL pools: create schemas, views, procedures, and inline table-valued functions. You can also add database users from your Azure Active Directory domain and assign them to the db_datareader role. For more information, see [Access lake databases using serverless SQL pool in Azure Synapse Analytics](metadat).|
| June 2022 | **Result set size limit increase** | The [maximum size of query result sets](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints) in serverless SQL pools has been increased from 200 GB to 400 GB. |
-| May 2022 | **Automatic character column length calculation for serverless SQL pools** | It is no longer necessary to define character column lengths for serverless SQL pools in the data lake. You can get optimal query performance [without having to define the schema](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_4), because the serverless SQL pool will use automatically calculated average column lengths and cardinality estimation. |
+| May 2022 | **Automatic character column length calculation for serverless SQL pools** | It's no longer necessary to define character column lengths for serverless SQL pools in the data lake. You can get optimal query performance [without having to define the schema](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_4), because the serverless SQL pool will use automatically calculated average column lengths and cardinality estimation. |
| April 2022 | **Cross-subscription restore for Azure Synapse SQL GA** | With the PowerShell `Az.Sql` module 3.8 update, the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) cmdlet can be used for cross-subscription restore of dedicated SQL pools. To learn more, see [Restore a dedicated SQL pool to a different subscription](sql-data-warehouse/sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell). This feature is now generally available for dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in a Synapse workspace. [What's the difference?](https://aka.ms/dedicatedSQLpooldiff)|
| April 2022 | **Recover SQL pool from dropped server or workspace** | With the PowerShell Restore cmdlets in the `Az.Sql` and `Az.Synapse` modules, you can now restore from a deleted server or workspace without filing a support ticket. For more information, see [Restore a dedicated SQL pool from a deleted Azure Synapse workspace](backuprestore/restore-sql-pool-from-deleted-workspace.md) or [Restore a standalone dedicated SQL pool (formerly SQL DW) from a deleted server](backuprestore/restore-sql-pool-from-deleted-workspace.md), depending on your scenario. |
| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools, as well as the dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022.|
| March 2022 | **Parallel execution for CETAS** | Better performance for [CREATE TABLE AS SELECT](sql/develop-tables-cetas.md) (CETAS) and subsequent SELECT statements is now made possible by the use of parallel execution plans. For examples, see [Better performance for CETAS and subsequent SELECTs](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_7).|
-
## Learn more

- [Get started with Azure Synapse Analytics](get-started.md)
This section summarizes recent improvements and features in SQL pools in Azure S
- [Become an Azure Synapse Influencer](https://aka.ms/synapseinfluencers)
- [Azure Synapse Analytics terminology](overview-terminology.md)
- [Azure Synapse Analytics migration guides](migration-guides/index.yml)
-- [Azure Synapse Analytics frequently asked questions](overview-faq.yml)
+- [Azure Synapse Analytics frequently asked questions](overview-faq.yml)
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
To enable Insider features:
- **Name**: ReleaseRing
- **Data**: insider
- You can do this with PowerShell. On your local device, open an elevated PowerShell prompt and run the following commands:
+ You can configure the registry with PowerShell. On your local device, open an elevated PowerShell prompt and run the following commands:
```powershell
New-Item -Path "HKLM:\SOFTWARE\Microsoft\MSRDC\Policies" -Force
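# The excerpt is truncated at this point. As an assumed completion (not shown in the
# original snippet), a second command like this would create the ReleaseRing value
# with the data "insider" described in the list above:
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSRDC\Policies" -Name "ReleaseRing" -PropertyType String -Value "insider" -Force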
You can check the extension status by visiting a website with media content, suc
:::image type="content" source="./media/mmr-extension-status-popup.png" alt-text="A screenshot of the MMR extension in the Microsoft Edge extension bar.":::
-Another way you can check the extension status is by selecting the extension icon, then selecting **Features supported on this website** from the drop-down menu to see whether the website supports the redirection extension.
+Another way to check the extension status is to select the extension icon. You'll then see a list of **Features supported on this website**, with a green check mark next to each feature that the website supports.
## Teams live events
To use multimedia redirection with Teams live events:
1. Open the link to the Teams live event in either the Edge or Chrome browser.
-1. Make sure you can see a green check mark next to the [multimedia redirection status icon](multimedia-redirection-intro.md#the-multimedia-redirection-status-icon). If the green check mark is there, MMR is enabled for Teams live events.
+1. Make sure you can see a green play icon as part of the [multimedia redirection status icon](multimedia-redirection-intro.md#the-multimedia-redirection-status-icon). If the green play icon is there, MMR is enabled for Teams live events.
1. Select **Watch on the web instead**. The Teams live event should automatically start playing in your browser. Make sure you only select **Watch on the web instead**, as shown in the following screenshot. If you use the native Teams app, MMR won't work.

   :::image type="content" source="./media/teams-live-events.png" alt-text="A screenshot of the 'Watch the live event in Microsoft Teams' page. The status icon and 'watch on the web instead' options are highlighted in red.":::
+## Enable video playback for all sites
+
+During the preview, multimedia redirection is limited by default to the sites listed in [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection). However, you can enable video playback for all sites so that you can test the feature with other websites. To enable video playback for all sites:
+
+1. Select the extension icon in your browser.
+
+1. Select **Show Advanced Settings**.
+
+1. Toggle **Enable video playback for all sites (beta)** to **on**.
+
+## Redirected video outlines
+
+Redirected video outlines let you highlight the video elements that are currently being redirected. When this setting is enabled, you'll see a bright highlighted border around the video element that is being redirected. To enable redirected video outlines:
+
+1. Select the extension icon in your browser.
+
+1. Select **Show Advanced Settings**.
+
+1. Toggle **Redirected video outlines** to **on**. You will need to refresh the webpage for the change to take effect.
+
## Next steps

For more information about multimedia redirection and how it works, see [What is multimedia redirection for Azure Virtual Desktop? (preview)](multimedia-redirection-intro.md).
virtual-desktop Troubleshoot Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-multimedia-redirection.md
The following issues are ones we're already aware of, so you won't need to repor
- Multimedia redirection doesn't currently support protected content, so videos from Netflix, for example, won't work.
-- During public preview, multimedia redirection will be disabled on all sites except for the sites listed in [Websites that work with MMR](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection). However, if you have the extension, you can enable multimedia redirection for all websites. We added the extension so organizations can test the feature on their company websites.
+- During public preview, multimedia redirection will be disabled on all sites except for the sites listed in [Websites that work with MMR](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection). However, you can enable multimedia redirection for all websites by following the steps in [Enable video playback for all sites](multimedia-redirection.md#enable-video-playback-for-all-sites). We added the extension so organizations can test the feature on their company websites.
- When you resize the video window, the window's size will adjust faster than the video itself. You'll also see this issue when minimizing and maximizing the window.
-- You might run into issue where you are stuck in the loading state on every video site. This is a known issue that we're currently investigating. To temporarily mitigate this issue, enter "logoff" into the Windows Start text field to sign out of Azure Virtual Desktop and restart the session.
+- You might run into an issue where you're stuck in the loading state on every video site. This is a known issue that we're currently investigating. To temporarily mitigate this issue, sign out of Azure Virtual Desktop and restart your session.
### The MSI installer doesn't work
virtual-machine-scale-sets Tutorial Install Apps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-cli.md
az vmss extension set \
  --name CustomScript \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
- --settings @customConfig.json
+ --settings customConfig.json
```

Each VM instance in the scale set downloads and runs the script from GitHub. In a more complex example, multiple application components and files could be installed. If the scale set is scaled up, the new VM instances automatically apply the same Custom Script Extension definition and install the required application.
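The contents of the settings file aren't shown in this excerpt. As a rough sketch only (the script URL and command below are placeholders, not values from the tutorial), a Custom Script Extension settings file typically pairs `fileUris` with a `commandToExecute`:

```bash
# Hypothetical customConfig.json for the Custom Script Extension.
# The script URL and command are placeholders; replace them with your own.
cat <<'EOF' > customConfig.json
{
  "fileUris": ["https://example.com/scripts/install_app.sh"],
  "commandToExecute": "bash install_app.sh"
}
EOF
```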
virtual-machines Dedicated Hosts How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md
This article guides you through how to create an Azure [dedicated host](dedicate
- Not all Azure VM SKUs, regions and availability zones support ultra disks, for more information about this topic, see [Azure ultra disks](disks-enable-ultra-ssd.md).
+- Currently, ADH doesn't support ultra disks on the following VM series: LSv2, M, Mv2, Msv2, Mdsv2, NVv3, and NVv4 (though these VM series do support ultra disks on multi-tenant VMs).
+
- The fault domain count of the virtual machine scale set can't exceed the fault domain count of the host group.

## Create a host group
You can also decide to use both availability zones and fault domains.
Enabling ultra disks is a host group level setting and can't be changed after a host group is created.
-If you intend to use LSv2 or M series VMs, with ultra disks on dedicated hosts, set host group's **Fault domain count** to **1**.
### [Portal](#tab/portal)
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
Azure Hybrid Benefit converts BYOS billing to pay-as-you-go, so that you pay onl
Azure Hybrid Benefit for BYOS virtual machines is available to all RHEL and SLES virtual machines that come from a custom image. It's also available to all RHEL and SLES BYOS virtual machines that come from an Azure Marketplace image.
-Azure Dedicated Host instances and SQL hybrid benefits aren't eligible for Azure Hybrid Benefit if you're already using Azure Hybrid Benefit with Linux virtual machines. Virtual machine scale sets are reserved instances, so they also can't use Azure Hybrid Benefit for BYOS virtual machines.
+Azure dedicated host instances and SQL hybrid benefits aren't eligible for Azure Hybrid Benefit if you already use Azure Hybrid Benefit with Linux virtual machines. Azure Hybrid Benefit for BYOS virtual machines doesn't support virtual machine scale sets or reserved instances (RIs).
## Get started
virtual-machines Endorsed Distros https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/endorsed-distros.md
Title: Linux distributions endorsed on Azure
-description: Learn about Linux on Azure-endorsed distributions, including guidelines for Ubuntu, CentOS, Oracle, and SUSE.
+description: Learn about Linux on Azure-endorsed distributions, including information about Ubuntu, CentOS, Oracle, Flatcar, Debian, Red Hat, and SUSE.
- Previously updated : 04/06/2021 Last updated : 07/24/2022 -++

# Endorsed Linux distributions on Azure
Kinvolk is the team behind Flatcar Container Linux, continuing the original Core
[https://www.oracle.com/technetwork/topics/cloud/faq-1963009.html](https://www.oracle.com/technetwork/topics/cloud/faq-1963009.html)
-Oracle's strategy is to offer a broad portfolio of solutions for public and private clouds. The strategy gives customers choice and flexibility in how they deploy Oracle software in Oracle clouds and other clouds. Oracle's partnership with Microsoft enables customers to deploy Oracle software in Microsoft public and private clouds with the confidence of certification and support from Oracle. Oracle's commitment and investment in Oracle public and private cloud solutions is unchanged.
+Oracle's strategy is to offer a broad portfolio of solutions for public and private clouds. The strategy gives customers choice and flexibility in how they deploy Oracle software in Oracle clouds and other clouds. Oracle's partnership with Microsoft enables customers to deploy Oracle software to Microsoft public and private clouds with the confidence of certification and support from Oracle. Oracle's commitment and investment in Oracle public and private cloud solutions is unchanged.
### Red Hat
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command.md
The Run Command feature uses the virtual machine (VM) agent to run PowerShell sc
## Benefits
-You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machine-run-commands), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand) for Windows VMs.
+You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machines/run-command), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand) for Windows VMs.
This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the only ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of improper network or administrative user configuration.
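As a hedged illustration of the PowerShell route linked above (the resource group, VM name, and script path are placeholders, not values from this article), a Run Command invocation might look like the following sketch:

```powershell
# Sketch only: run a local PowerShell script on a Windows VM through Run Command.
# Replace the resource group, VM name, and script path with your own values.
Invoke-AzVMRunCommand `
  -ResourceGroupName "myResourceGroup" `
  -VMName "myVM" `
  -CommandId "RunPowerShellScript" `
  -ScriptPath "C:\scripts\check-config.ps1"
```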
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/hb-hc-known-issues.md
To prevent low-level hardware access that can result in security vulnerabilities
On Ubuntu 18.04-based marketplace VM images with kernel versions `5.4.0-1039-azure #42` and newer, some older Mellanox OFED versions are incompatible, causing an increase in VM boot time of up to 30 minutes in some cases. This has been reported for both Mellanox OFED versions 5.2-1.0.4.0 and 5.2-2.2.0.0. The issue is resolved with Mellanox OFED 5.3-1.0.0.1. If it's necessary to use the incompatible OFED, a solution is to use the **Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290** marketplace VM image or older, and not to update the kernel.
-## Accelerated Networking on HB, HC, HBv2, HBv3 and NDv2
+## Accelerated Networking on HB, HC, HBv2, HBv3, NDv2 and NDv4
+
+[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](../../hb-series.md), [HC](../../hc-series.md), [HBv2](../../hbv2-series.md), [HBv3](../../hbv3-series.md), [NDv2](../../ndv2-series.md) and [NDv4](../../nda100-v4-series.md). This capability now allows enhanced throughput (up to 30 Gbps) and lower latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact the behavior of certain MPI implementations when running jobs over InfiniBand. Specifically, the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to the earlier mlx5_0). This may require tweaking of the MPI command lines, especially when using the UCX interface (commonly with OpenMPI and HPC-X).
+
+The simplest solution currently is to use the latest HPC-X on the CentOS-HPC VM images, where we rename the IB/AN interfaces accordingly, or to run the [script](https://github.com/Azure/azhpc-images/blob/master/common/install_azure_persistent_rdma_naming.sh) to rename the InfiniBand interface.
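If the interface name does differ, one way to handle it on the MPI command line is to point UCX at the renamed device explicitly. This is a sketch only, assuming OpenMPI or HPC-X with the UCX transport; the host file and application name are placeholders:

```bash
# Sketch: pin UCX to the renamed InfiniBand device (mlx5_1) instead of mlx5_0.
# Verify the actual device name on your VM first, for example with ibv_devinfo.
mpirun -np 16 --hostfile hosts \
  -x UCX_NET_DEVICES=mlx5_1:1 \
  ./my_mpi_app
```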
-[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](../../hb-series.md), [HC](../../hc-series.md), [HBv2](../../hbv2-series.md), [HBv3](../../hbv3-series.md) and [NDv2](../../ndv2-series.md). This capability now allows enhanced throughout (up to 30 Gbps) and latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact behavior of certain MPI implementations when running jobs over InfiniBand. Specifically the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to earlier mlx5_0). This may require tweaking of the MPI command lines especially when using the UCX interface (commonly with OpenMPI and HPC-X). The simplest solution currently may be to use the latest HPC-X on the CentOS-HPC VM images or disable Accelerated Networking if not required.
More details on this are available on this [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-and-hbv2/ba-p/2067965) with instructions on how to address any observed issues.

## InfiniBand driver installation on non-SR-IOV VMs
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/setup-mpi.md
make -j 8 && make install
```

> [!NOTE]
-> Recent builds of UCX have fixed an [issue](https://github.com/openucx/ucx/pull/5965) whereby the right InfiniBand interface is chosen in the presence of multiple NIC interfaces. For more information, see [Troubleshooting known issues with HPC and GPU VMs](hb-hc-known-issues.md#accelerated-networking-on-hb-hc-hbv2-hbv3-and-ndv2) on running MPI over InfiniBand when Accelerated Networking is enabled on the VM.
+> Recent builds of UCX have fixed an [issue](https://github.com/openucx/ucx/pull/5965) whereby the right InfiniBand interface is chosen in the presence of multiple NIC interfaces. For more information, see [Troubleshooting known issues with HPC and GPU VMs](hb-hc-known-issues.md) on running MPI over InfiniBand when Accelerated Networking is enabled on the VM.
## HPC-X
virtual-network Update Virtual Network Peering Address Space https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/update-virtual-network-peering-address-space.md
Last updated 07/10/2022
#Customer Intent: As a cloud engineer, I need to update the address space for peered virtual networks without incurring downtime from the current address spaces. I wish to do this in the Azure Portal.
+
# Updating the address space for a peered virtual network - Portal

In this article, you'll learn how to update a peered virtual network by adding or deleting an address space without incurring downtime, using the Azure portal. This feature is useful when you need to grow or resize the virtual networks in Azure after scaling your workloads.
In this section, you'll modify the address range prefix for an existing address
2. From the list of virtual networks, select the virtual network where you're adding an address range.
1. Select **Address space** under settings.
1. On the **Address space** page, change the address range prefix per your requirements, and select **Save** when finished.
+
+ :::image type="content" source="media/update-virtual-network-peering-address-space/update-address-prefix-thumb.png" alt-text="Image of the Address Space page for changing a subnet's prefix." lightbox="media/update-virtual-network-peering-address-space/update-address-prefix-full.png":::
1. Select **Peerings** under Settings and select the checkbox for the peering requiring synchronization.
-1. Select **Sync from the task bar.
+1. Select **Sync** from the task bar.
+
+ :::image type="content" source="media/update-virtual-network-peering-address-space/sync-peering-thumb.png" alt-text="Image of the Peerings page where you resynchronize a peering connection." lightbox="media/update-virtual-network-peering-address-space/sync-peering-full.png":::
1. Select the name of the other peered virtual network under **Peer**.
1. Under **Settings** of the peered virtual network, select **Address space** and verify that the address space listed has been updated.
+
+ :::image type="content" source="media/update-virtual-network-peering-address-space/verify-address-space-thumb.png" alt-text="Image of the Address Space page where you verify the address space has changed." lightbox="media/update-virtual-network-peering-address-space/verify-address-space-full.png":::
> [!NOTE]
> When an update is made to the address space for a virtual network, you'll need to sync the virtual network peering so that each remote peered VNet learns of the new address space. We recommend that you run sync after every address space resize operation instead of performing multiple resizing operations and then running the sync operation.
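If you'd rather script the sync step than repeat it in the portal, a rough Azure CLI sketch follows. This assumes your Azure CLI version includes the `az network vnet peering sync` command; the resource group, virtual network, and peering names are placeholders:

```bash
# Hedged sketch: resync a peering after resizing the address space.
# Run this once for each remote peered virtual network.
az network vnet peering sync \
  --resource-group myResourceGroup \
  --vnet-name vnet-1 \
  --name vnet-1-to-vnet-2
```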
In this section, you'll add an IP address range to the IP address space of a pee
2. From the list of virtual networks, select the virtual network where you're adding an address range.
3. Select **Address space**, under **Settings**.
4. On the **Address space** page, add the address range per your requirements, and select **Save** when finished.
+
+ :::image type="content" source="media/update-virtual-network-peering-address-space/add-address-range-thumb.png" alt-text="Image of the Address Space page used to add an IP address range." lightbox="media/update-virtual-network-peering-address-space/add-address-range-full.png":::
1. Select **Peering**, under **Settings** and **Sync** the peering connection.
1. As previously done, verify the address space is updated on the remote virtual network.

## Delete an address range
In this task, you'll delete an IP address range from an address space. First, yo
2. From the list of virtual networks, select the virtual network where you're removing an address range.
1. Select **Subnets**, under **settings**
1. On the right of the address range you want to remove, select **...** and select **Delete** from the dropdown list. Choose **Yes** to confirm deletion.
+
+ :::image type="content" source="media/update-virtual-network-peering-address-space/delete-subnet.png" alt-text="Image of Subnet page and menu for deleting a subnet.":::
1. Select **Save** when you've completed all changes.
1. Select **Peering**, under **Settings** and **Sync** the peering connection.
1. As previously done, verify the address space is updated on the remote virtual network.
In this task, you'll delete an IP address range from an address space. First, yo
- [Links]() +
vpn-gateway Point To Site About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-about.md
The validation of the client certificate is performed by the VPN gateway and hap
### Authenticate using native Azure Active Directory authentication
-Azure AD authentication allows users to connect to Azure using their Azure Active Directory credentials. Native Azure AD authentication is only supported for OpenVPN protocol and Windows 10 and later and also requires the use of the [Azure VPN Client](https://go.microsoft.com/fwlink/?linkid=2117554).
+Azure AD authentication allows users to connect to Azure using their Azure Active Directory credentials. Native Azure AD authentication is only supported for the OpenVPN protocol and also requires the use of the [Azure VPN Client](https://go.microsoft.com/fwlink/?linkid=2117554). The supported client operating systems are Windows 10 or later and macOS.
With native Azure AD authentication, you can leverage Azure AD's conditional access as well as Multi-Factor Authentication (MFA) features for VPN.
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azu
1. In the left pane, locate the **VPN connection**, then click **Connect**.
+The Azure VPN Client provides high availability by letting you add a secondary VPN client profile, giving you a more resilient way to access your VPN. You can add a secondary client profile from any of the already imported client profiles, which **enables the high availability** option for Windows. If there's a **region outage** or the client fails to connect to the primary VPN client profile, the Azure VPN Client automatically connects to the secondary client profile without causing any disruptions.
+
## <a name="openvpn"></a>OpenVPN - OpenVPN Client steps

This section applies to certificate authentication configurations that are configured to use the OpenVPN tunnel type. The following steps help you configure the **OpenVPN &reg; Protocol** client and connect to your VNet.