Updates from: 04/05/2022 01:11:16
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Web Api Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-web-api-application.md
Previously updated : 08/24/2021 Last updated : 03/30/2022
# Add a web API application to your Azure Active Directory B2C tenant
- Register web API resources in your tenant so that they can accept and respond to requests by client applications that present an access token. This article shows you how to register a web API in Azure Active Directory B2C (Azure AD B2C).
+This article shows you how to register web API resources in your Azure Active Directory B2C (Azure AD B2C) tenant so that they can accept and respond to requests by client applications that present an access token.
-To register an application in your Azure AD B2C tenant, you can use our new unified **App registrations** experience or our legacy **Applications (Legacy)** experience. [Learn more about the new experience](./app-registrations-training-guide.md).
+To register an application in your Azure AD B2C tenant, you can use the Azure portal's new unified **App registrations** experience or the legacy **Applications (Legacy)** experience. [Learn more about the new experience](./app-registrations-training-guide.md).
#### [App registrations](#tab/app-reg-ga/)
To register an application in your Azure AD B2C tenant, you can use our new unif
1. Select **Register**.
1. Record the **Application (client) ID** for use in your web API's code.
-If you have an application that implements the implicit grant flow, for example a [JavaScript-based single-page application (SPA)](tutorial-register-spa.md), you can enable the flow by following these steps:
-
-1. Under **Manage**, select **Authentication**.
-1. Under **Implicit grant**, select both the **Access tokens** and **ID tokens** check boxes.
-1. Select **Save**.
#### [Applications (Legacy)](#tab/applications-legacy/)
If you have an application that implements the implicit grant flow, for example
1. For **Include web app/ web API** and **Allow implicit flow**, select **Yes**.
1. For **Reply URL**, enter an endpoint where Azure AD B2C should return any tokens that your application requests. During development, you might set the reply URL to a value such as `https://localhost:44332`. For testing purposes, set the reply URL to `https://jwt.ms`.
1. For **App ID URI**, enter the identifier used for your web API. The full identifier URI including the domain is generated for you. For example, `https://contosotenant.onmicrosoft.com/api`.
-1. Click **Create**.
+1. Select **Create**.
1. On the properties page, record the application ID that you'll use when you configure the web application. * * *
active-directory-b2c Application Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/application-types.md
Previously updated : 06/17/2021 Last updated : 03/30/2022
# Application types that can be used in Active Directory B2C
-Azure Active Directory B2C (Azure AD B2C) supports authentication for a variety of modern application architectures. All of them are based on the industry standard protocols [OAuth 2.0](protocols-overview.md) or [OpenID Connect](protocols-overview.md). This article describes the types of applications that you can build, independent of the language or platform you prefer. It also helps you understand the high-level scenarios before you start building applications.
+Azure Active Directory B2C (Azure AD B2C) supports authentication for various modern application architectures. All of them are based on the industry standard protocols [OAuth 2.0](protocols-overview.md) or [OpenID Connect](protocols-overview.md). This article describes the types of applications that you can build, independent of the language or platform you prefer. It also helps you understand the high-level scenarios before you start building applications.
Every application that uses Azure AD B2C must be registered in your [Azure AD B2C tenant](tutorial-create-tenant.md) by using the [Azure portal](https://portal.azure.com/). The application registration process collects and assigns values, such as:
In a web application, each execution of a [policy](user-flow-overview.md) takes
Validation of the `id_token` by using a public signing key that is received from Azure AD is sufficient to verify the identity of the user. This process also sets a session cookie that can be used to identify the user on subsequent page requests.
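To make that validation step concrete, the following is a minimal sketch using the `jose` npm package (an assumed library choice; the article doesn't prescribe one). The tenant, policy, and client ID values are placeholders:

```typescript
// Minimal sketch: validating a B2C-issued id_token with the "jose" npm
// package. Tenant, policy, and client ID values are placeholders.
import { createRemoteJWKSet, jwtVerify } from "jose";

const tenant = "contosob2c";                              // placeholder B2C tenant name
const policy = "b2c_1_signupsignin";                      // placeholder user flow name
const clientId = "00000000-0000-0000-0000-000000000000";  // placeholder app ID

// Azure AD B2C publishes the per-policy public signing keys at a JWKS endpoint.
const jwks = createRemoteJWKSet(
  new URL(
    `https://${tenant}.b2clogin.com/${tenant}.onmicrosoft.com/${policy}/discovery/v2.0/keys`
  )
);

export async function validateIdToken(idToken: string) {
  // jwtVerify checks the signature against the published keys, plus the
  // issuer and audience claims. Copy the exact issuer value from the policy's
  // /.well-known/openid-configuration document.
  const { payload } = await jwtVerify(idToken, jwks, {
    issuer: "<issuer-from-openid-configuration>", // placeholder
    audience: clientId,
  });
  return payload; // verified claims; payload.sub identifies the user
}
```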
-To see this scenario in action, try one of the web application sign-in code samples in our [Getting started section](overview.md).
+To see this scenario in action, try one of the web application sign in code samples in our [Getting started section](overview.md).
-In addition to facilitating simple sign-in, a web server application might also need to access a back-end web service. In this case, the web application can perform a slightly different [OpenID Connect flow](openid-connect.md) and acquire tokens by using authorization codes and refresh tokens. This scenario is depicted in the following [Web APIs section](#web-apis).
+In addition to facilitating simple sign in, a web server application might also need to access a back-end web service. In this case, the web application can perform a slightly different [OpenID Connect flow](openid-connect.md) and acquire tokens by using authorization codes and refresh tokens. This scenario is depicted in the following [Web APIs section](#web-apis).
## Single-page applications
Many modern web applications are built as client-side single-page applications ("SPAs"). Developers write them by using JavaScript or a SPA framework such as Angular, Vue, and React. These applications run on a web browser and have different authentication characteristics than traditional server-side web applications.
Many modern web applications are built as client-side single-page applications (
Azure AD B2C provides **two** options to enable single-page applications to sign in users and get tokens to access back-end services or web APIs:
### Authorization code flow (with PKCE)
-- [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md). The authorization code flow allows the application to exchange an authorization code for **ID** tokens to represent the authenticated user and **Access** tokens needed to call protected APIs. In addition, it returns **Refresh** tokens that provide long-term access to resources on behalf of users without requiring interaction with those users.
+
+[OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md) allows the application to exchange an authorization code for **ID** tokens to represent the authenticated user and **Access** tokens needed to call protected APIs. In addition, it returns **Refresh** tokens that provide long-term access to resources on behalf of users without requiring interaction with those users.
This is the **recommended** approach. Having limited-lifetime refresh tokens helps your application adapt to [modern browser cookie privacy limitations](../active-directory/develop/reference-third-party-cookies-spas.md), like Safari ITP.
-To take advantage of this flow, your application can use an authentication library that supports it, like [MSAL.js 2.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser).
+To take advantage of this flow, your application can use an authentication library that supports it, like [MSAL.js 2.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser).
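As a hedged illustration of that setup, the following sketch configures `@azure/msal-browser` for an Azure AD B2C authority. The tenant, policy, client ID, and redirect URI values are placeholders, not any sample's actual configuration:

```typescript
// Minimal sketch: sign-in with the authorization code + PKCE flow using
// @azure/msal-browser (MSAL.js 2.x). All values below are placeholders.
import { PublicClientApplication } from "@azure/msal-browser";

const msalInstance = new PublicClientApplication({
  auth: {
    clientId: "00000000-0000-0000-0000-000000000000", // placeholder app ID
    authority:
      "https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/b2c_1_signupsignin",
    knownAuthorities: ["contosob2c.b2clogin.com"], // B2C authorities must be allow-listed
    redirectUri: "https://localhost:5000",         // must match the app registration
  },
});

// Call once on page load so MSAL can complete a pending redirect round trip.
msalInstance.handleRedirectPromise().then((result) => {
  if (result) {
    console.log(`Signed in: ${result.account?.username}`);
  }
});

export function signIn(): void {
  // MSAL generates the PKCE code verifier/challenge, sends the user to
  // Azure AD B2C, and later redeems the returned authorization code itself.
  void msalInstance.loginRedirect({ scopes: ["openid", "offline_access"] });
}
```

Because MSAL handles the PKCE details internally, the application code never touches the code verifier.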
<!-- ![Single-page applications-auth](./media/tutorial-single-page-app/spa-app-auth.svg) -->
![Single-page applications-auth](./media/tutorial-single-page-app/active-directory-oauth-code-spa.png)
### Implicit grant flow
-- [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the implicit grant flow. The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow does not return a **Refresh token**.
-This authentication flow does not include application scenarios that use cross-platform JavaScript frameworks such as Electron and React-Native. Those scenarios require further capabilities for interaction with the native platforms.
+Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), support only the implicit grant flow, or your application might be implemented to use the implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow doesn't return a **Refresh token**.
+
+This authentication flow doesn't include application scenarios that use cross-platform JavaScript frameworks such as Electron and React-Native. Those scenarios require further capabilities for interaction with the native platforms.
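For illustration, the implicit grant boils down to a single request to the `/authorize` endpoint, shown in the following sketch with placeholder tenant, policy, and client values (a library such as MSAL.js 1.x builds and issues this request for you):

```typescript
// Minimal sketch: the shape of the /authorize request an implicit-flow
// library issues on your behalf. All values are placeholders.
const tenant = "contosob2c";
const policy = "b2c_1_signupsignin";

const authorizeUrl =
  `https://${tenant}.b2clogin.com/${tenant}.onmicrosoft.com/${policy}/oauth2/v2.0/authorize` +
  `?client_id=00000000-0000-0000-0000-000000000000` + // placeholder app ID
  `&response_type=id_token+token` + // "+" decodes to a space: both token types
  `&redirect_uri=${encodeURIComponent("https://jwt.ms")}` +
  `&scope=openid` +
  `&response_mode=fragment` +       // tokens come back in the URL fragment
  `&nonce=${crypto.randomUUID()}`;  // replay protection, echoed in the id_token

// Navigating the browser to authorizeUrl returns the ID and access tokens
// directly in the redirect fragment; no refresh token is issued.
window.location.assign(authorizeUrl);
```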
## Web APIs
In this flow, the application executes [policies](user-flow-overview.md) and rec
#### Daemons/server-side applications
-Applications that contain long-running processes or that operate without the presence of a user also need a way to access secured resources such as web APIs. These applications can authenticate and get tokens by using the application's identity (rather than a user's delegated identity) and by using the OAuth 2.0 client credentials flow. Client credential flow is not the same as on-behalf-flow and on-behalf-flow should not be used for server-to-server authentication.
+Applications that contain long-running processes or that operate without the presence of a user also need a way to access secured resources such as web APIs. These applications can authenticate and get tokens by using their identities (rather than a user's delegated identity) and by using the OAuth 2.0 client credentials flow. The client credentials flow isn't the same as the on-behalf-of flow, and the on-behalf-of flow shouldn't be used for server-to-server authentication.
-Although the OAuth 2.0 client credentials grant flow is not currently directly supported by the Azure AD B2C authentication service, you can set up client credential flow using Azure AD and the Microsoft identity platform /token (https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com/oauth2/v2.0/token) endpoint for an application in your Azure AD B2C tenant. An Azure AD B2C tenant shares some functionality with Azure AD enterprise tenants.
+Although the OAuth 2.0 client credentials grant flow isn't currently directly supported by the Azure AD B2C authentication service, you can set up the client credentials flow using Azure AD and the Microsoft identity platform `/token` endpoint (`https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com/oauth2/v2.0/token`) for an application in your Azure AD B2C tenant. An Azure AD B2C tenant shares some functionality with Azure AD enterprise tenants.
To set up client credential flow, see [Azure Active Directory v2.0 and the OAuth 2.0 client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). A successful authentication results in the receipt of a token formatted so that it can be used by Azure AD as described in [Azure AD token reference](../active-directory/develop/id-tokens.md).
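As a rough sketch of that request (placeholder tenant, app ID, and scope; the secret is assumed to come from the environment, never from source code):

```typescript
// Minimal sketch: the client credentials grant against the Azure AD /token
// endpoint of a B2C tenant. Tenant, app ID, and scope values are placeholders.
const tenant = "contosob2c";

export async function getAppOnlyToken(): Promise<string> {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: "00000000-0000-0000-0000-000000000000", // placeholder app ID
    client_secret: process.env.CLIENT_SECRET ?? "",    // read from configuration
    scope: "https://graph.microsoft.com/.default",     // all app permissions granted to the app
  });

  const response = await fetch(
    `https://login.microsoftonline.com/${tenant}.onmicrosoft.com/oauth2/v2.0/token`,
    { method: "POST", body } // URLSearchParams sets the form-encoded content type
  );
  const json = await response.json();
  return json.access_token; // a token issued to the application itself
}
```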
For instructions on registering a management application, see [Manage Azure AD B
Many architectures include a web API that needs to call another downstream web API, where both are secured by Azure AD B2C. This scenario is common in native clients that have a Web API back-end and calls a Microsoft online service such as the Microsoft Graph API.
-This chained web API scenario can be supported by using the OAuth 2.0 JWT bearer credential grant, also known as the on-behalf-of flow. However, the on-behalf-of flow is not currently implemented in the Azure AD B2C.
+This chained web API scenario can be supported by using the OAuth 2.0 JWT bearer credential grant, also known as the on-behalf-of flow. However, the on-behalf-of flow isn't currently implemented in Azure AD B2C.
## Next steps
active-directory-b2c Configure Authentication In Sample Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md
Previously updated : 04/03/2022 Last updated : 03/30/2022
To create the SPA registration, do the following:
1. Record the secret's **Value** for use in your client application code. This secret value is never displayed again after you leave this page. You use this value as the application secret in your application's code.
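For example, a Node.js web app might use the recorded secret in a confidential client as in the following sketch; the tenant, policy, client, and redirect values are placeholders rather than the sample's real configuration:

```typescript
// Minimal sketch: using the recorded client secret in a Node.js confidential
// client with @azure/msal-node. All values below are placeholders; read the
// secret from configuration, never from source code.
import { ConfidentialClientApplication } from "@azure/msal-node";

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: "00000000-0000-0000-0000-000000000000", // placeholder app ID
    authority:
      "https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/b2c_1_signupsignin",
    knownAuthorities: ["contosob2c.b2clogin.com"],
    clientSecret: process.env.CLIENT_SECRET ?? "",    // the value recorded above
  },
});

// The web app later redeems the authorization code that Azure AD B2C returns:
export async function redeemCode(code: string) {
  return cca.acquireTokenByCode({
    code,
    scopes: ["openid", "offline_access"],
    redirectUri: "http://localhost:3000/redirect",    // placeholder redirect URI
  });
}
```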
-### Step 2.5: Grant permissions
+### Step 2.5: Grant API permissions to the web app
[!INCLUDE [active-directory-b2c-app-integration-grant-permissions](../../includes/active-directory-b2c-app-integration-grant-permissions.md)]
active-directory-b2c Configure Authentication Sample Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-angular-spa-app.md
Previously updated : 09/15/2021 Last updated : 03/30/2022
This article uses a sample Angular single-page application (SPA) to illustrate h
OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to securely sign in a user to an application. This Angular sample uses [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular) and the [MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser). MSAL is a Microsoft-provided library that simplifies adding authentication and authorization support to Angular SPAs.
-### Sign-in flow
+### Sign in flow
The sign-in flow involves the following steps:
-1. The user opens the app and selects **Sign-in**.
+1. The user opens the app and selects **Sign in**.
1. The app starts an authentication request and redirects the user to Azure AD B2C.
1. The user [signs up or signs in](add-sign-up-and-sign-in-policy.md) and [resets the password](add-password-reset-policy.md), or signs in with a [social account](add-identity-provider.md).
1. Upon successful sign-in, Azure AD B2C returns an authorization code to the app. The app takes the following actions:
The following diagram describes the app registrations and the app architecture.
[!INCLUDE [active-directory-b2c-app-integration-call-api](../../includes/active-directory-b2c-app-integration-call-api.md)]
-### Sign-out flow
+### Sign out flow
[!INCLUDE [active-directory-b2c-app-integration-sign-out-flow](../../includes/active-directory-b2c-app-integration-sign-out-flow.md)]
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
Previously updated : 10/25/2021 Last updated : 03/30/2022
This article uses a sample JavaScript single-page application (SPA) to illustrat
## Overview
-OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. You can use it to securely sign a user in to an application. This single-page application sample uses [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) and the OIDC PKCE flow. MSAL.js is a Microsoft provided library that simplifies adding authentication and authorization support to SPAs.
+OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. You can use it to securely sign a user into an application. This SPA sample uses [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) and the OIDC PKCE flow. MSAL.js is a Microsoft-provided library that simplifies adding authentication and authorization support to SPAs.
-### Sign-in flow
+### Sign in flow
The sign-in flow involves the following steps:
The app architecture and registrations are illustrated in the following diagram:
[!INCLUDE [active-directory-b2c-app-integration-call-api](../../includes/active-directory-b2c-app-integration-call-api.md)]
-### Sign-out flow
+### Sign out flow
[!INCLUDE [active-directory-b2c-app-integration-sign-out-flow](../../includes/active-directory-b2c-app-integration-sign-out-flow.md)]
In this step, you create the SPA and the web API application registrations, and
### Step 2.3: Register the SPA
-To create the SPA registration, do the following:
+To create the SPA registration, use the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
To create the SPA registration, do the following:
1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox.
1. Select **Register**.
-### Step 2.4: Enable the implicit grant flow
-Next, enable the implicit grant flow:
+Record the **Application (client) ID** to use later, when you configure the web application.
-1. Under **Manage**, select **Authentication**.
+![Screenshot of the web app Overview page for recording your web application ID.](./media/configure-authentication-sample-web-app/get-azure-ad-b2c-app-id.png)
-1. Select **Try out the new experience** (if shown).
-1. Under **Implicit grant**, select the **ID tokens** checkbox.
+### Step 2.4: Enable the implicit grant flow
-1. Select **Save**.
+In your own environment, if your SPA app uses MSAL.js 1.3 or earlier with the implicit grant flow, or if you configure the [https://jwt.ms/](https://jwt.ms/) app for testing a user flow or custom policy, you need to enable the implicit grant flow in the app registration:
- Record the **Application (client) ID** to use later, when you configure the web application.
+1. In the left menu, under **Manage**, select **Authentication**.
+
+1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes.
+
+1. Select **Save**.
- ![Screenshot of the web app Overview page for recording your web application ID.](./media/configure-authentication-sample-web-app/get-azure-ad-b2c-app-id.png)
+If your app uses MSAL.js 2.0 or later, don't enable the implicit grant flow, because MSAL.js 2.0+ supports the authorization code flow with PKCE. The SPA app in this article uses the PKCE flow, so you don't need to enable the implicit grant flow.
### Step 2.5: Grant permissions
Next, enable the implicit grant flow:
## Step 3: Get the SPA sample code
-This sample demonstrates how a single-page application can use Azure AD B2C for user sign-up and sign-in. Then the app acquires an access token and calls a protected web API.
+This sample demonstrates how a single-page application can use Azure AD B2C for user sign-up and sign in. Then the app acquires an access token and calls a protected web API.
To get the SPA sample code, you can do either of the following:
You're now ready to test the single-page application's scoped access to the API.
![Screenshot of the SPA sample app displayed in the browser window.](./media/configure-authentication-sample-spa-app/sample-app-sign-in.png)
-1. Complete the sign-up or sign-in process. After you've logged in successfully, you should see the "User \<your username> logged in" message.
+1. Complete the sign-up or sign in process. After you've logged in successfully, you should see the "User \<your username> logged in" message.
1. Select the **Call API** button. The SPA sends the access token in a request to the protected web API, which returns the display name of the logged-in user:
![Screenshot of the SPA in a browser window, showing the username JSON result that's returned by the API.](./media/configure-authentication-sample-spa-app/sample-app-result.png)
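Under the hood, the **Call API** button amounts to something like the following sketch; the scope and API URL are placeholders, and `msalInstance` stands in for the app's configured MSAL instance:

```typescript
// Minimal sketch: acquire an access token silently, fall back to an
// interactive redirect if needed, and call the API with a bearer token.
import {
  InteractionRequiredAuthError,
  PublicClientApplication,
} from "@azure/msal-browser";

declare const msalInstance: PublicClientApplication; // the app's configured instance

export async function callApi(): Promise<void> {
  const request = {
    scopes: ["https://contosob2c.onmicrosoft.com/api/demo.read"], // placeholder scope
    account: msalInstance.getAllAccounts()[0],
  };

  let result;
  try {
    result = await msalInstance.acquireTokenSilent(request);
  } catch (e) {
    if (e instanceof InteractionRequiredAuthError) {
      // The cached token can't be renewed silently; re-run the interactive flow.
      return msalInstance.acquireTokenRedirect(request);
    }
    throw e;
  }

  const response = await fetch("https://localhost:44332/hello", { // placeholder API URL
    headers: { Authorization: `Bearer ${result.accessToken}` },
  });
  console.log(await response.json()); // e.g. the signed-in user's display name
}
```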
active-directory-b2c Configure Authentication Sample Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md
This article uses a sample ASP.NET web application to illustrate how to add Azur
OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. You can use OIDC to securely sign users in to an application. This web app sample uses [Microsoft Identity Web](https://www.nuget.org/packages/Microsoft.Identity.Web). Microsoft Identity Web is a set of ASP.NET Core libraries that simplify adding authentication and authorization support to web apps.
-The sign-in flow involves the following steps:
+The sign in flow involves the following steps:
1. Users go to the web app and select **Sign-in**.
1. The app initiates an authentication request and redirects users to Azure AD B2C.
The sign-in flow involves the following steps:
When the ID token is expired or the app session is invalidated, the app initiates a new authentication request and redirects users to Azure AD B2C. If the Azure AD B2C [SSO session](session-behavior.md) is active, Azure AD B2C issues an access token without prompting users to sign in again. If the Azure AD B2C session expires or becomes invalid, users are prompted to sign in again.
-### Sign-out
+### Sign out
[!INCLUDE [active-directory-b2c-app-integration-sign-out-flow](../../includes/active-directory-b2c-app-integration-sign-out-flow.md)]
To enable your application to sign in with Azure AD B2C, register your app in th
During app registration, you'll specify the *redirect URI*. The redirect URI is the endpoint to which users are redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests.
-### Step 2.1: Register the app
-
-To create the web app registration, do the following:
+To create the web app registration, use the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
To create the web app registration, do the following:
![Screenshot of the web app Overview page for recording your web application ID.](./media/configure-authentication-sample-web-app/get-azure-ad-b2c-app-id.png)
-### Step 2.2: Enable ID tokens
-
-For web apps that request an ID token directly from Azure AD B2C, enable the implicit grant flow in the app registration.
-
-1. On the left pane, under **Manage**, select **Authentication**.
-1. Under **Implicit grant**, select the **ID tokens (used for implicit and hybrid flows)** and **Access tokens (used for implicit flows)** checkboxes.
-1. Select **Save**.
-
## Step 3: Get the web app sample
[Download the zip file](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/refs/heads/master.zip), or clone the sample web application from GitHub.
Your final configuration file should look like the following JSON:
:::image type="content" source="./media/configure-authentication-sample-web-app/web-app-sign-in.png" alt-text="Screenshot of the sign in and sign up button on the project Welcome page.":::
-1. Complete the sign-up or sign-in process.
+1. Complete the sign-up or sign in process.
After successful authentication, you'll see your display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, select **Claims**.
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bloksec.md
To get started, you'll need:
- A BlokSec [trial account](https://bloksec.com/).
-- If you haven't already done so, [register](./tutorial-register-applications.md) a web application, [and enable ID token implicit grant](./tutorial-register-applications.md#enable-id-token-implicit-grant).
+- If you haven't already done so, [register](./tutorial-register-applications.md) a web application.
::: zone-end
::: zone pivot="b2c-custom-policy"
To get started, you'll need:
- A BlokSec [trial account](https://bloksec.com/).
-- If you haven't already done so, [register](./tutorial-register-applications.md) a web application, [and enable ID token implicit grant](./tutorial-register-applications.md#enable-id-token-implicit-grant).
+- If you haven't already done so, [register](./tutorial-register-applications.md) a web application.
- Complete the steps in the [**Get started with custom policies in Azure Active Directory B2C**](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
::: zone-end
active-directory-b2c Partner Haventec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-haventec.md
To get started, you'll need:
### Part - 1 Create an application registration in Haventec
-If you haven't already done so, [register](tutorial-register-applications.md) a web application, and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant).
+If you haven't already done so, [register](tutorial-register-applications.md) a web application.
### Part - 2 Add a new Identity provider in Azure AD B2C
active-directory-b2c Partner Trusona https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-trusona.md
In this scenario, Trusona acts as an identity provider for Azure AD B2C to enabl
| Step | Description |
|------|-------------|
|1 | A user attempts to sign in to or sign up with the application. The user is authenticated via the Azure AD B2C sign-up and sign-in policy. During sign-up, the user's previously verified email address from the Trusona app is used. |
-|2 | Azure B2C redirects the user to the Trusona OpenID Connect (OIDC) identity provider using the implicit flow. |
+|2 | Azure B2C redirects the user to the Trusona OpenID Connect (OIDC) identity provider. |
|3 | For desktop PC-based logins, Trusona displays a unique, stateless, animated, and dynamic QR code for scanning with the Trusona app. For mobile-based logins, Trusona uses a "deep link" to open the Trusona app. These two methods are used for device and ultimately user discovery. |
|4 | The user scans the displayed QR code with the Trusona app. |
|5 | The user's account is found in the Trusona cloud service and the authentication is prepared. |
active-directory-b2c Publish App To Azure Ad App Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/publish-app-to-azure-ad-app-gallery.md
Previously updated : 06/15/2021 Last updated : 03/30/2022
-# Publish your Azure AD B2C app to the Azure AD app gallery
+# Publish your Azure Active Directory B2C app to the Azure Active Directory app gallery
The Azure Active Directory (Azure AD) app gallery is a catalog of thousands of apps. The app gallery makes it easy to deploy and configure single sign-on (SSO) and automate user setup. You can find popular cloud apps in the gallery, such as Workday, ServiceNow, and Zoom.
-This article describes how to publish your Azure Active Directory B2C (Azure AD B2C) app in the Azure AD app gallery. When your app is published, it's listed among the options that customers can choose from when they're adding apps to their Azure AD tenant.
+This article describes how to publish your Azure Active Directory B2C (Azure AD B2C) app in the Azure AD app gallery. When you publish your app, it's listed among the options that customers can choose from when they're adding apps to their Azure AD tenant.
Here are some benefits of adding your Azure AD B2C app to the app gallery:
Here are some benefits of adding your Azure AD B2C app to the app gallery:
- Customers can assign the app to various users and groups within their organization.
- The tenant administrator can grant tenant-wide admin consent to your app.
-## Sign-in flow overview
+## Sign in flow overview
-The sign-in flow involves the following steps:
+The sign in flow involves the following steps:
-1. Users go to the [My Apps portal](https://myapps.microsoft.com/) and select your app, which opens the app sign-in URL.
-1. The app sign-in URL starts an authorization request and redirects users to the Azure AD B2C authorization endpoint.
+1. Users go to the [My Apps portal](https://myapps.microsoft.com/) and select your app. The app opens the app sign in URL.
+1. The app sign in URL starts an authorization request and redirects users to the Azure AD B2C authorization endpoint.
1. Users choose to sign in with their Azure AD "Corporate" account. Azure AD B2C takes them to the Azure AD authorization endpoint, where they sign in with their work account.
-1. If the Azure AD SSO session is active, Azure AD issues an access token without prompting users to sign in again. If the Azure AD session expires or becomes invalid, users are prompted to sign in again.
+1. If the Azure AD SSO session is active, Azure AD issues an access token without prompting users to sign in again. Otherwise, users are prompted to sign in again.
![Diagram of the sign-in OpenID connect flow.](./media/publish-app-to-azure-ad-app-gallery/app-gallery-sign-in-flow.png)
Depending on the users' SSO session and Azure AD identity settings, they might b
- Complete multifactor authentication.
- Accept the consent page. Your customer's tenant administrator can [grant tenant-wide admin consent to an app](../active-directory/manage-apps/grant-admin-consent.md). When consent is granted, the consent page won't be presented to users.
-Upon successful sign-in, Azure AD returns a token to Azure AD B2C. Azure AD B2C validates and reads the token claims, and then returns a token to your application.
+Upon successful sign in, Azure AD returns a token to Azure AD B2C. Azure AD B2C validates and reads the token claims, and then returns a token to your application.
## Prerequisites
Upon successful sign-in, Azure AD returns a token to Azure AD B2C. Azure AD B2C
## Step 1: Register your application in Azure AD B2C
-To enable sign-in to your app with Azure AD B2C, register your app in the Azure AD B2C directory. Registering your app establishes a trust relationship between the app and Azure AD B2C.
+To enable sign in to your app with Azure AD B2C, register your app in the Azure AD B2C directory. Registering your app establishes a trust relationship between the app and Azure AD B2C.
-If you haven't already done so, [register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). Later, you'll register this app with the Azure app gallery.
+If you haven't already done so, [register a web application](tutorial-register-applications.md). Later, you'll register this app with the Azure app gallery.
-## Step 2: Set up sign-in for multitenant Azure AD
+## Step 2: Set up sign in for multitenant Azure AD
-To allow employees and consumers from any Azure AD tenant to sign in by using Azure AD B2C, follow the guidance for [setting up sign-in for multitenant Azure AD](identity-provider-azure-ad-multi-tenant.md?pivots=b2c-custom-policy).
+To allow employees and consumers from any Azure AD tenant to sign in by using Azure AD B2C, follow the guidance for [setting up sign in for multitenant Azure AD](identity-provider-azure-ad-multi-tenant.md?pivots=b2c-custom-policy).
## Step 3: Prepare your app
-In your app, copy the URL of the sign-in endpoint. If you use the [web application sample](configure-authentication-sample-web-app.md), the sign-in URL is `https://localhost:5001/MicrosoftIdentity/Account/SignIn?`. This URL is where the Azure AD app gallery takes users to sign in to your app.
+In your app, copy the URL of the sign in endpoint. If you use the [web application sample](configure-authentication-sample-web-app.md), the sign in URL is `https://localhost:5001/MicrosoftIdentity/Account/SignIn?`. This URL is where the Azure AD app gallery takes users to sign in to your app.
In production environments, the app registration redirect URI is ordinarily a publicly accessible endpoint where your app is running, such as `https://woodgrovedemo.com/Account/SignIn`. The reply URL must begin with `https`.
Finally, add the multitenant app to the Azure AD app gallery. Follow the instruc
|What feature would you like to enable when listing your application in the gallery? | Select **Federated SSO (SAML, WS-Fed & OpenID Connect)**. |
| Select your application federation protocol| Select **OpenID Connect & OAuth 2.0**. |
| Application (Client) ID | Provide the ID of [your Azure AD B2C application](#step-1-register-your-application-in-azure-ad-b2c). |
- | Application sign-in URL|Provide the app sign-in URL as it's configured in [Step 3. Prepare your app](#step-3-prepare-your-app).|
+ | Application sign in URL|Provide the app sign in URL as it's configured in [Step 3. Prepare your app](#step-3-prepare-your-app).|
| Multitenant| Select **Yes**. |
| | |
active-directory-b2c Quickstart Native App Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-native-app-desktop.md
Title: "Quickstart: Set up sign-in for a desktop app"
+ Title: "Quickstart: Set up sign in for a desktop app using Azure Active Directory B2C"
-description: In this Quickstart, run a sample WPF desktop application that uses Azure Active Directory B2C to provide account sign-in.
+description: In this Quickstart, run a sample WPF desktop application that uses Azure Active Directory B2C to provide account sign in.
-# Quickstart: Set up sign-in for a desktop app using Azure Active Directory B2C
+# Quickstart: Set up sign in for a desktop app using Azure Active Directory B2C
Azure Active Directory B2C (Azure AD B2C) provides cloud identity management to keep your application, business, and customers protected. Azure AD B2C enables your applications to authenticate to social accounts and enterprise accounts using open standard protocols. In this quickstart, you use a Windows Presentation Foundation (WPF) desktop application to sign in using a social identity provider and call an Azure AD B2C protected web API.
Azure Active Directory B2C (Azure AD B2C) provides cloud identity management to
## Sign in using your account
-1. Click **Sign in** to start the **Sign Up or Sign In** workflow.
+1. Select **Sign in** to start the **Sign Up or Sign In** workflow.
![Screenshot of the sample WPF application](./media/quickstart-native-app-desktop/wpf-sample-application.png)
The sample supports several sign-up options. These options include using a social identity provider or creating a local account using an email address. For this quickstart, use a social identity provider account from either Facebook, Google, or Microsoft.
-2. Azure AD B2C presents a sign-in page for a fictitious company called Fabrikam for the sample web application. To sign up using a social identity provider, click the button of the identity provider you want to use.
+2. Azure AD B2C presents a sign in page for a fictitious company called Fabrikam for the sample web application. To sign up using a social identity provider, select the button of the identity provider you want to use.
![Sign In or Sign Up page showing identity providers](./media/quickstart-native-app-desktop/sign-in-or-sign-up-wpf.png)
You authenticate (sign in) using your social account credentials and authorize the application to read information from your social account. By granting access, the application can retrieve profile information from the social account such as your name and city.
-2. Finish the sign-in process for the identity provider.
+2. Finish the sign in process for the identity provider.
Your new account profile details are pre-populated with information from your social account.
Azure Active Directory B2C (Azure AD B2C) provides cloud identity management to
Azure AD B2C provides functionality to allow users to update their profiles. The sample web app uses an Azure AD B2C edit profile user flow for the workflow.
-1. In the application menu bar, click **Edit profile** to edit the profile you created.
+1. In the application menu bar, select **Edit profile** to edit the profile you created.
![Edit profile button highlighted in WPF sample app](./media/quickstart-native-app-desktop/edit-profile-wpf.png)
2. Choose the identity provider associated with the account you created. For example, if you used Facebook as the identity provider when you created your account, choose Facebook to modify the associated profile details.
-3. Change your **Display name** or **City**, and then click **Continue**.
+3. Change your **Display name** or **City**, and then select **Continue**.
A new access token is displayed in the *Token info* text box. If you want to verify the changes to your profile, copy and paste the access token into the token decoder https://jwt.ms.
## Access a protected API resource
-Click **Call API** to make a request to the protected resource.
+Select **Call API** to make a request to the protected resource.
![Call API](./media/quickstart-native-app-desktop/call-api-wpf.png)
You can use your Azure AD B2C tenant if you plan to try other Azure AD B2C quick
In this quickstart, you used a sample desktop application to:
-* Sign in with a custom login page
+* Sign in with a custom sign in page
* Sign in with a social identity provider
* Create an Azure AD B2C account
* Call a web API protected by Azure AD B2C
active-directory-b2c Tokens Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tokens-overview.md
Previously updated : 03/03/2022 Last updated : 03/30/2022
A [registered application](tutorial-register-applications.md) receives tokens an
- `https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/oauth2/v2.0/token`
Security tokens that your application receives from Azure AD B2C can come from the `/authorize` or `/token` endpoints. When ID tokens are acquired from the:
-- `/authorize` endpoint, it's done using the [implicit flow](implicit-flow-single-page-application.md), which is often used for users signing in to JavaScript-based web applications.
+- `/authorize` endpoint, it's done using the [implicit flow](implicit-flow-single-page-application.md), which is often used for users signing in to JavaScript-based web applications. However, if your app uses [MSAL.js 2.0 or later](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser), don't enable implicit flow grant in your app registration as MSAL.js 2.0+ supports the authorization code flow with PKCE.
- `/token` endpoint, it's done using the [authorization code flow](openid-connect.md#get-a-token), which keeps the token hidden from the browser.
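For illustration, redeeming an authorization code at the `/token` endpoint looks like the following sketch; all values are placeholders, and in practice an authentication library performs this step for you:

```typescript
// Minimal sketch: redeeming an authorization code at the /token endpoint so
// tokens never pass through the browser. The code_verifier is the PKCE secret
// generated for the earlier authorize request.
const tenant = "contosob2c";
const policy = "b2c_1_signupsignin";

export async function redeemAuthorizationCode(code: string, codeVerifier: string) {
  const body = new URLSearchParams({
    grant_type: "authorization_code",
    client_id: "00000000-0000-0000-0000-000000000000", // placeholder app ID
    code,
    code_verifier: codeVerifier,
    redirect_uri: "https://localhost:5000", // must match the authorize request
    scope: "openid offline_access",
  });

  const response = await fetch(
    `https://${tenant}.b2clogin.com/${tenant}.onmicrosoft.com/${policy}/oauth2/v2.0/token`,
    { method: "POST", body }
  );
  return response.json(); // { id_token, access_token, refresh_token, ... }
}
```

## Claims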
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
Previously updated : 03/01/2022 Last updated : 03/30/2022 zone_pivot_groups: b2c-policy-type
active-directory-b2c Tutorial Register Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-applications.md
Title: "Tutorial: Register an application"
+ Title: "Tutorial: Register a web application in Azure Active Directory B2C"
description: Follow this tutorial to learn how to register a web application in Azure Active Directory B2C using the Azure portal.
Previously updated : 09/20/2021 Last updated : 03/30/2022
For a web application, you need to create an application secret. The client secr
## Enable ID token implicit grant
-The defining characteristic of the implicit grant is that tokens, such as ID and access tokens, are returned directly from Azure AD B2C to the application. For web apps, such as ASP.NET Core web apps and [https://jwt.ms](https://jwt.ms), that request an ID token directly from the authorization endpoint, enable the implicit grant flow in the app registration.
+If you register this app and configure it with the [https://jwt.ms/](https://jwt.ms/) app for testing a user flow or custom policy, you need to enable the implicit grant flow in the app registration:
1. In the left menu, under **Manage**, select **Authentication**.
-1. Under Implicit grant, select both the **Access tokens** and **ID tokens** check boxes.
+
+1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes.
+
1. Select **Save**.
## Next steps
active-directory-b2c Tutorial Register Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-spa.md
Title: "Tutorial: Register a single-page application"
+ Title: Register a single-page application (SPA) in Azure Active Directory B2C
-description: Follow this tutorial to learn how to register a single-page application (SPA) in Azure Active Directory B2C using the Azure portal.
+description: Follow this guide to learn how to register a single-page application (SPA) in Azure Active Directory B2C using the Azure portal.
- Previously updated : 09/20/2021+ Last updated : 03/30/2022
-# Tutorial: Register a single-page application (SPA) in Azure Active Directory B2C
+# Register a single-page application (SPA) in Azure Active Directory B2C
-Before your [applications](application-types.md) can interact with Azure Active Directory B2C (Azure AD B2C), they must be registered in a tenant that you manage. This tutorial shows you how to register a single-page application ("SPA") using the Azure portal.
+Before your [applications](application-types.md) can interact with Azure Active Directory B2C (Azure AD B2C), they must be registered in a tenant that you manage. This guide shows you how to register a single-page application ("SPA") using the Azure portal.
## Overview of authentication options
-Many modern web applications are built as client-side single-page applications ("SPAs"). Developers write them by using JavaScript or a SPA framework such as Angular, Vue, and React. These applications run on a web browser and have different authentication characteristics than traditional server-side web applications.
+Many modern web applications are built as client-side single-page applications ("SPAs"). Developers write them by using JavaScript or an SPA framework such as Angular, Vue, and React. These applications run on a web browser and have different authentication characteristics than traditional server-side web applications.
Azure AD B2C provides **two** options to enable single-page applications to sign in users and get tokens to access back-end services or web APIs:
### Authorization code flow (with PKCE)
-- [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md). The authorization code flow allows the application to exchange an authorization code for **ID** tokens to represent the authenticated user and **Access** tokens needed to call protected APIs. In addition, it returns **Refresh** tokens that provide long-term access to resources on behalf of users without requiring interaction with those users.
+
+[OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md) allows the application to exchange an authorization code for **ID** tokens to represent the authenticated user and **Access** tokens needed to call protected APIs. In addition, it returns **Refresh** tokens that provide long-term access to resources on behalf of users without requiring interaction with those users.
This is the **recommended** approach. Having limited-lifetime refresh tokens helps your application adapt to [modern browser cookie privacy limitations](../active-directory/develop/reference-third-party-cookies-spas.md), like Safari ITP.
To take advantage of this flow, your application can use an authentication libra
![Single-page applications-auth](./media/tutorial-single-page-app/spa-app-auth.svg)
### Implicit grant flow
-- [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the implicit grant flow. The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow does not return a **Refresh token**.
+
+Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), support only the implicit grant flow, or your application might be implemented to use the implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow doesn't return a **Refresh token**.
![Single-page applications-implicit](./media/tutorial-single-page-app/spa-app.svg)
-This authentication flow does not include application scenarios that use cross-platform JavaScript frameworks such as Electron and React-Native. Those scenarios require further capabilities for interaction with the native platforms.
+This authentication flow doesn't include application scenarios that use cross-platform JavaScript frameworks such as Electron and React-Native. Those scenarios require further capabilities for interaction with the native platforms.
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-If you haven't already created your own [Azure AD B2C Tenant](tutorial-create-tenant.md), create one now. You can use an existing Azure AD B2C tenant.
+- If you haven't already created your own [Azure AD B2C Tenant](tutorial-create-tenant.md), create one now. You can use an existing Azure AD B2C tenant.
## Register the SPA application
If you haven't already created your own [Azure AD B2C Tenant](tutorial-create-te
1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**
1. Under **Redirect URI**, select **Single-page application (SPA)**, and then enter `https://jwt.ms` in the URL text box.
- The redirect URI is the endpoint to which the user is sent by the authorization server (Azure AD B2C, in this case) after completing its interaction with the user, and to which an access token or authorization code is sent upon successful authorization. In a production application, it's typically a publicly accessible endpoint where your app is running, like `https://contoso.com/auth-response`. For testing purposes like this tutorial, you can set it to `https://jwt.ms`, a Microsoft-owned web application that displays the decoded contents of a token (the contents of the token never leave your browser). During app development, you might add the endpoint where your application listens locally, like `http://localhost:5000`. You can add and modify redirect URIs in your registered applications at any time.
+ The redirect URI is the endpoint to which the authorization server (Azure AD B2C, in this case) sends the user after completing its interaction with the user. Also, the redirect URI endpoint receives the access token or authorization code upon successful authorization. In a production application, it's typically a publicly accessible endpoint where your app is running, like `https://contoso.com/auth-response`. For testing purposes like this guide, you can set it to `https://jwt.ms`, a Microsoft-owned web application that displays the decoded contents of a token (the contents of the token never leave your browser). During app development, you might add the endpoint where your application listens locally, like `http://localhost:5000`. You can add and modify redirect URIs in your registered applications at any time.
The following restrictions apply to redirect URIs:
* The reply URL must begin with the scheme `https`, unless using `localhost`.
- * The reply URL is case-sensitive. Its case must match the case of the URL path of your running application. For example, if your application includes as part of its path `.../abc/response-oidc`, do not specify `.../ABC/response-oidc` in the reply URL. Because the web browser treats paths as case-sensitive, cookies associated with `.../abc/response-oidc` may be excluded if redirected to the case-mismatched `.../ABC/response-oidc` URL.
+ * The reply URL is case-sensitive. Its case must match the case of the URL path of your running application. For example, if your application includes as part of its path `.../abc/response-oidc`, don't specify `.../ABC/response-oidc` in the reply URL. Because the web browser treats paths as case-sensitive, cookies associated with `.../abc/response-oidc` may be excluded if redirected to the case-mismatched `.../ABC/response-oidc` URL.
1. Under **Permissions**, select the *Grant admin consent to openid and offline_access permissions* check box.
1. Select **Register**.
## Enable the implicit flow
-If using the implicit flow, you need to enable the implicit grant flow in the app registration.
+
+If your SPA app uses MSAL.js 1.3 or earlier with the implicit grant flow, or if you configure the [https://jwt.ms/](https://jwt.ms/) app for testing a user flow or custom policy, you need to enable the implicit grant flow in the app registration:
1. In the left menu, under **Manage**, select **Authentication**.
-1. Under **Implicit grant**, select both the **Access tokens** and **ID tokens** check boxes.
+
+1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes.
+
1. Select **Save**.
+If your app uses MSAL.js 2.0 or later, don't enable the implicit grant flow, because MSAL.js 2.0+ supports the authorization code flow with PKCE.
+
## Migrate from the implicit flow
-If you have an existing application that uses the implicit flow, we recommend migrating to using the authorization code flow by using a framework that supports it, like [MSAL.js 2.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser).
+If you have an existing application that uses the implicit flow, we recommend that you migrate to the authorization code flow by using a framework that supports it, like [MSAL.js 2.0+](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser).
-When all your production single-page applications represented by an app registration are using the authorization code flow, disable the implicit grant flow settings.
+When all your production single-page applications represented by an app registration are using the authorization code flow, disable the implicit grant flow settings as follows:
1. In the left menu, under **Manage**, select **Authentication**.
1. Under **Implicit grant**, de-select both the **Access tokens** and **ID tokens** check boxes.
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 03/03/2022 Last updated : 04/04/2022
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## March 2022
+
+### New articles
+
+- [Configure eID-Me with Azure Active Directory B2C for identity verification](partner-eid-me.md)
+- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
+- [Configure Transmit Security with Azure Active Directory B2C for passwordless authentication](partner-bindid.md)
+
+### Updated articles
+
+- [Configure eID-Me with Azure Active Directory B2C for identity verification](partner-eid-me.md)
+- [Language customization in Azure Active Directory B2C](language-customization.md)
+- [Configure Transmit Security with Azure Active Directory B2C for passwordless authentication](partner-bindid.md)
+- [Set up direct sign in using Azure Active Directory B2C](direct-signin.md)
+- [Single-page application sign in using the OAuth 2.0 implicit flow in Azure Active Directory B2C](implicit-flow-single-page-application.md)
+- [Azure AD B2C: Authentication protocols](protocols-overview.md)
+- [Configure Akamai with Azure Active Directory B2C](partner-akamai.md)
+- [Cookies definitions for Azure AD B2C](cookie-definitions.md)
+- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
+- [Azure Active Directory B2C: What's new](whats-new-docs.md)
+- [Define custom attributes in Azure Active Directory B2C](user-flow-custom-attributes.md)
+- [Options for registering a SAML application in Azure AD B2C](saml-service-provider-options.md)
+
## February 2022
### New articles
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
Previously updated : 03/04/2022 Last updated : 04/04/2022 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain and define advanced configuration options so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To complete this tutorial, you need the following resources and privileges:
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant]. * You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
-* You need Domain Services Contributor Azure role to create the required Azure AD DS resources.
+* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) Azure role to create the required Azure AD DS resources.
Although not required for Azure AD DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Previously updated : 03/08/2022 Last updated : 04/04/2022 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To complete this tutorial, you need the following resources and privileges:
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant]. * You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
-* You need Domain Services Contributor Azure role to create the required Azure AD DS resources.
+* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) Azure role to create the required Azure AD DS resources.
* A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers that can't perform general internet queries might block the ability to create a managed domain. Although not required for Azure AD DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
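One quick way to confirm that a virtual network's DNS servers can resolve and reach required endpoints is to test from a VM on that network. A minimal sketch; the endpoint shown is illustrative:

```powershell
# Confirm name resolution and outbound HTTPS from a VM on the virtual network
Resolve-DnsName -Name "management.azure.com"
Test-NetConnection -ComputerName "management.azure.com" -Port 443
```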
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
Previously updated : 07/13/2021 Last updated : 04/04/2022
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/whats-new-docs.md
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 03/09/2022 Last updated : 04/04/2022
Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## March 2022
+
+### Updated articles
+
+- [How Azure Active Directory provisioning integrates with SAP SuccessFactors](sap-successfactors-integration-reference.md)
+- [How Azure Active Directory provisioning integrates with Workday](workday-integration-reference.md)
+- [Tutorial: Develop a sample SCIM endpoint in Azure Active Directory](use-scim-to-build-users-and-groups-endpoints.md)
+- [Skip deletion of user accounts that go out of scope in Azure Active Directory](skip-out-of-scope-deletions.md)
+- [Azure Active Directory application provisioning: What's new](whats-new-docs.md)
+
+
## February 2022

### Updated articles
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
Allow access to the following URLs:
| `*.msappproxy.net` <br> `*.servicebus.windows.net` | 443/HTTPS | Communication between the connector and the Application Proxy cloud service | | `crl3.digicert.com` <br> `crl4.digicert.com` <br> `ocsp.digicert.com` <br> `crl.microsoft.com` <br> `oneocsp.microsoft.com` <br> `ocsp.msocsp.com`<br> | 80/HTTP | The connector uses these URLs to verify certificates. | | `login.windows.net` <br> `secure.aadcdn.microsoftonline-p.com` <br> `*.microsoftonline.com` <br> `*.microsoftonline-p.com` <br> `*.msauth.net` <br> `*.msauthimages.net` <br> `*.msecnd.net` <br> `*.msftauth.net` <br> `*.msftauthimages.net` <br> `*.phonefactor.net` <br> `enterpriseregistration.windows.net` <br> `management.azure.com` <br> `policykeyservice.dc.ad.msft.net` <br> `ctldl.windowsupdate.com` <br> `www.microsoft.com/pkiops` | 443/HTTPS | The connector uses these URLs during the registration process. |
-| `ctldl.windowsupdate.com` | 80/HTTP | The connector uses this URL during the registration process. |
+| `ctldl.windowsupdate.com` <br> `www.microsoft.com/pkiops` | 80/HTTP | The connector uses these URLs during the registration process. |
You can allow connections to `*.msappproxy.net`, `*.servicebus.windows.net`, and other URLs above if your firewall or proxy lets you configure access rules based on domain suffixes. If not, you need to allow access to the [Azure IP ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). The IP ranges are updated each week.
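To spot-check that a connector host can reach these endpoints, `Test-NetConnection` is one option. A sketch; the hostnames are illustrative samples from the table above:

```powershell
# Spot-check outbound 443/HTTPS connectivity from the connector host
"login.windows.net", "contoso.msappproxy.net" | ForEach-Object {
    Test-NetConnection -ComputerName $_ -Port 443 |
        Select-Object ComputerName, TcpTestSucceeded
}
```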
active-directory Application Proxy Configure Connectors With Proxy Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-connectors-with-proxy-servers.md
Allow access to the following URLs:
| &ast;.msappproxy.net<br>&ast;.servicebus.windows.net | 443/HTTPS | Communication between the connector and the Application Proxy cloud service | | crl3.digicert.com<br>crl4.digicert.com<br>ocsp.digicert.com<br>crl.microsoft.com<br>oneocsp.microsoft.com<br>ocsp.msocsp.com<br> | 80/HTTP | The connector uses these URLs to verify certificates. | | login.windows.net<br>secure.aadcdn.microsoftonline-p.com<br>&ast;.microsoftonline.com<br>&ast;.microsoftonline-p.com<br>&ast;.msauth.net<br>&ast;.msauthimages.net<br>&ast;.msecnd.net<br>&ast;.msftauth.net<br>&ast;.msftauthimages.net<br>&ast;.phonefactor.net<br>enterpriseregistration.windows.net<br>management.azure.com<br>policykeyservice.dc.ad.msft.net<br>ctldl.windowsupdate.com | 443/HTTPS | The connector uses these URLs during the registration process. |
-| ctldl.windowsupdate.com | 80/HTTP | The connector uses this URL during the registration process. |
+| ctldl.windowsupdate.com<br>www.microsoft.com/pkiops | 80/HTTP | The connector uses these URLs during the registration process. |
If your firewall or proxy allows you to configure DNS allow lists, you can allow connections to \*.msappproxy.net and \*.servicebus.windows.net.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/whats-new-docs.md
Title: "What's new in Azure Active Directory application proxy" description: "New and updated documentation for the Azure Active Directory application proxy." Previously updated : 03/09/2022 Last updated : 04/04/2022
Welcome to what's new in Azure Active Directory application proxy documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## March 2022
+
+### Updated articles
+
+- [Azure AD Application Proxy: Version release history](application-proxy-release-version-history.md)
+- [High availability and load balancing of your Application Proxy connectors and applications](application-proxy-high-availability-load-balancing.md)
+
+
## February 2022

### Updated articles
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
services.Configure<MsalDistributedTokenCacheAdapterOptions>(options =>
options.DisableL1Cache = false; // Or limit the memory (by default, this is 500 MB)
- options.sizeLimit = 1024 * 1024 * 1024, // 1 GB
+ options.L1CacheOptions.SizeLimit = 1024 * 1024 * 1024; // 1 GB
// You can choose whether to encrypt the cache
options.Encrypt = false;
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 03/10/2022 Last updated : 04/04/2022
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## March 2022
+
+### New articles
+
+- [Secure access control using groups in Azure AD](secure-group-access-control.md)
+
+### Updated articles
+
+- [Authentication flow support in MSAL](msal-authentication-flows.md)
+- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
+- [Configure an app to trust an external identity provider (preview)](workload-identity-federation-create-trust.md)
+- [OAuth 2.0 and OpenID Connect in the Microsoft identity platform](active-directory-v2-protocols.md)
+- [Signing key rollover in the Microsoft identity platform](active-directory-signing-key-rollover.md)
+- [Troubleshoot publisher verification](troubleshoot-publisher-verification.md)
+ ## February 2022 ### Updated articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Quickstart: Add sign-in with Microsoft to a web app](web-app-quickstart.md)
- [Quickstart: Protect a web API with the Microsoft identity platform](web-api-quickstart.md)
- [Quickstart: Sign in users and call the Microsoft Graph API from a mobile application](mobile-app-quickstart.md)
-
-## December 2021
-
-### New articles
-- [Build Zero Trust-ready apps using Microsoft identity platform features and tools](zero-trust-for-developers.md)
-- [Quickstart: Sign in users in single-page apps (SPA) using the auth code flow](single-page-app-quickstart.md)
-- [Run automated integration tests](test-automate-integration-testing.md)
-- [Secure identity in line-of-business application using Zero Trust principles](secure-line-of-business-apps.md)
-- [What are workload identities?](workload-identities-overview.md)
-
-### Updated articles
-- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
-- [Microsoft identity platform developer glossary](developer-glossary.md)
-- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
As you configure [cross-tenant access settings](cross-tenant-access-settings-b2b
1. Open PowerShell and run the following script, substituting the file location in the first line with your text file: ```powershell
-$policy = Get-Content "C:\policyobject.txt" | ConvertTo-Json
+$policy = Get-Content "C:\policyobject.txt"
$maxSize = 1024*25
$size = [System.Text.Encoding]::UTF8.GetByteCount($policy)
write-host "Remaining Bytes available in policy object"
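# The remaining capacity follows from the two values above; a minimal sketch of
# the final step (an assumed completion, not verbatim from the source script):
Write-Host ($maxSize - $size)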
active-directory Entitlement Management Access Package Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md
na Previously updated : 06/18/2020 Last updated : 03/24/2022
active-directory Entitlement Management Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-access.md
na Previously updated : 12/23/2021 Last updated : 3/30/2022
Once you have found the access package in the My Access portal, you can submit a
1. Or click **Request access** directly.
+1. You may have to answer questions and provide a business justification for your request. If there are questions you need to answer, enter your responses in the fields.
+ 1. If the **Business justification** box is displayed, type a justification for needing access.
-1. If **Request for specific period?** is enabled, select **Yes** or **No**.
+1. Set the **Request for specific period?** toggle to request access to the access package for a set duration of time:
+
+ 1. If you don't need access for a specific period, set the **Request for specific period?** toggle to **No**.
-1. If necessary, specify the start date and end date.
+ 1. If you need access for a certain time period, set the **Request for specific period?** toggle to **Yes**. Then, specify the start date and end date for access.
- ![My Access portal - Request access](./media/entitlement-management-shared/my-access-request-access.png)
+ ![My Access portal - Request access](./media/entitlement-management-shared/my-access-request-access.png)
1. When finished, click **Submit** to submit your request.
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Granting tenant-wide admin consent requires you to sign in as a user that is aut
To grant tenant-wide admin consent, you need: - An Azure AD user account with one of the following roles:
- - Global Administrator or Privileged Role Administrator, for granting consnet for apps requesting any permission, for any API.
- - Cloud Application Administrator or Application Administrator, for granting consnet for apps requesting any permission for any API, _except_ Azure AD Graph or Microsoft Graph app roles (application permissions).
+ - Global Administrator or Privileged Role Administrator, for granting consent for apps requesting any permission, for any API.
+ - Cloud Application Administrator or Application Administrator, for granting consent for apps requesting any permission for any API, _except_ Azure AD Graph or Microsoft Graph app roles (application permissions).
- A custom directory role that includes the [permission to grant permissions to applications](../roles/custom-consent-permissions.md), for the permissions required by the application. ## Grant tenant-wide admin consent in Enterprise apps
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 03/09/2022 Last updated : 04/04/2022
reviewer: napuri
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## March 2022
+
+### New articles
+
+- [Overview of admin consent workflow](admin-consent-workflow-overview.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to SAP ERP](f5-big-ip-sap-erp-easy-button.md)
+
+### Updated articles
+
+- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md)
+- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
+- [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)
+- [Quickstart: View enterprise applications](view-applications-portal.md)
+- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
+- [Review admin consent requests](review-admin-consent-requests.md)
+- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)
+- [Manage app consent policies](manage-app-consent-policies.md)
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
+- [Tutorial: Configure F5's BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO](f5-big-ip-ldap-header-easybutton.md)
+
+
## February 2022

### New articles
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
You can further restrict permissions by assigning roles at smaller scopes or by
> [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Create Azure AD Domain Services instance | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
+> | Create Azure AD Domain Services instance | [Application Administrator](../roles/permissions-reference.md#application-administrator) and [Groups Administrator](../roles/permissions-reference.md#groups-administrator)|[Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) |
> | Perform all Azure AD Domain Services tasks | [AAD DC Administrators group](../../active-directory-domain-services/tutorial-create-management-vm.md#administrative-tasks-you-can-perform-on-a-managed-domain) | | > | Read all configuration | Reader on Azure subscription containing AD DS service | |
active-directory My Staff Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/my-staff-configure.md
Once you have configured administrative units, you can apply this scope to your
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) as a User Administrator.
-1. Select **Azure Active Directory** > **User settings** > **User feature previews** > **Manage user feature preview settings**.
+1. Select **Azure Active Directory** > **User settings** > **User features** > **Manage user feature settings**.
1. Under **Administrators can access My Staff**, you can choose to enable for all users, selected users, or no user access.
You can view audit logs for actions taken in My Staff in the Azure Active Direct
## Next steps [My Staff user documentation](https://support.microsoft.com/account-billing/manage-front-line-users-with-my-staff-c65b9673-7e1c-4ad6-812b-1a31ce4460bd)
-[Administrative units documentation](administrative-units.md)
+[Administrative units documentation](administrative-units.md)
active-directory F5 Big Ip Oracle Jd Edwards Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-oracle-jd-edwards-easy-button.md
+
+ Title: Configure F5 BIG-IP Easy Button for SSO to Oracle JD Edwards using Azure AD
+description: Learn to implement SHA with header-based Single Sign-On to Oracle JD Edwards using F5's BIG-IP Easy Button guided configuration
+Last updated : 03/29/2022
+# Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle JD Edwards using Azure AD
+
+In this article, learn to secure Oracle JD Edwards (JDE) using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+
+Integrating a BIG-IP with Azure AD provides many benefits, including:
+
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
+
+* Full SSO between Azure AD and BIG-IP published services
+
+* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
+
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+This scenario looks at the classic **Oracle JDE application** using **HTTP authorization headers** to manage access to protected content.
+
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+
+Having a BIG-IP in front of the app enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+
+## Scenario architecture
+
+The SHA solution for this scenario is made up of several components:
+
+**Oracle JDE Application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the Oracle service.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-oracle-jde/sp-initiated-flow.png)
+
+| Steps| Description |
+| -- |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected back to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP injects Azure AD attributes as headers in request to the application |
+| 6| Application authorizes request and returns payload |
+
+## Prerequisites
+
+Prior BIG-IP experience isn't necessary, but you need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](/azure/active-directory/manage-apps/f5-bigip-deployment-guide)
+
+* Any of the following F5 BIG-IP license SKUs
+
+ * F5 BIG-IP® Best bundle
+
+ * F5 BIG-IP Access Policy Manager™ (APM) standalone license
+
+ * F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD or created directly within Azure AD and flowed back to your on-premises directory
+
+* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* An [SSL Web certificate](/azure/active-directory/manage-apps/f5-bigip-deployment-guide#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
+
+* An existing Oracle JDE environment
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+>[!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+
+This first step creates a tenant app registration that will be used to authorize **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for the published application, and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) with Application Administrator rights
+
+2. From the left navigation pane, select the **Azure Active Directory** service
+
+3. Under Manage, select **App registrations > New registration**
+
+4. Enter a display name for your application. For example, F5 BIG-IP Easy Button
+
+5. Specify who can use the application > **Accounts in this organizational directory only**
+
+6. Select **Register** to complete the initial app registration
+
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
+
+9. Go to **Certificates & Secrets**, generate a new **Client secret** and note it down
+
+10. Go to **Overview**, note the **Client ID** and **Tenant ID**
+
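+If you prefer to script the registration itself, the following is a minimal Azure PowerShell sketch. It assumes the Az module; the display name is illustrative, and granting the Microsoft Graph application permissions plus admin consent still happens in the portal, as described above:
+
+```powershell
+# Create the Easy Button app registration and a client secret
+Connect-AzAccount
+$app    = New-AzADApplication -DisplayName "F5 BIG-IP Easy Button" -SignInAudience "AzureADMyOrg"
+$secret = New-AzADAppCredential -ObjectId $app.Id -EndDate (Get-Date).AddYears(1)
+
+# Values the Easy Button wizard asks for
+$app.AppId                  # Client ID
+(Get-AzContext).Tenant.Id   # Tenant ID
+$secret.SecretText          # Client secret; record it now, it can't be retrieved later
+```
+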
+## Configure Easy Button
+
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
+
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-oracle-jde/easy-button-template.png)
+
+2. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-oracle-jde/config-steps.png)
+
+3. Follow the sequence of steps required to publish your application.
+
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-oracle-jde/config-steps-flow.png#lightbox)
+
+### Configuration Properties
+
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client you registered in your Azure AD tenant earlier, as an application. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
+
+Some of these are global settings that can be re-used for publishing more applications, further reducing deployment time and effort.
+
+1. Provide a unique **Configuration Name** that enables an admin to easily distinguish between Easy Button configurations
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant Id, Client ID**, and **Client Secret** you noted down from your registered application
+
+4. Before you select **Next**, confirm the BIG-IP can successfully connect to your tenant.
+
+ ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-oracle-jde/configuration-general-and-service-account-properties.png)
+
+### Service Provider
+
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured
+
+2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-oracle-jde/service-provider-settings.png)
+
+ Next, under the optional **Security Settings**, specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides assurance that token contents can't be intercepted, nor personal or corporate data compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-easy-button-oracle-jde/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned close the browser tab to return to the main tab.
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-easy-button-oracle-jde/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**
+
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM uses to decrypt Azure AD assertions
+
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP uploads to Azure AD for encrypting the issued SAML assertions.
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-oracle-jde/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **JD Edwards Protected by F5 BIG-IP > Add**.
+
+![ Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-oracle-jde/azure-configuration-add-big-ip-application.png)
+
+#### Azure Configuration
+
+1. Enter the **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and the icon that users see on the MyApps portal
+
+2. In the **Sign On URL (optional)** enter the public FQDN of the JDE application being secured.
+
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-oracle-jde/azure-configuration-add-display-info.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificate's password in **Signing Key Passphrase**
+
+5. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-oracle-jde/azure-configuration-sign-certificates.png)
+
+6. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing; otherwise, all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-oracle-jde/azure-configuration-add-user-groups.png)
+
+#### User Attributes & Claims
+
+When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims.
+
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-oracle-jde/user-attributes-claims.png)
+
+You can include additional Azure AD attributes if necessary, but the Oracle JDE scenario only requires the default attributes.
+
+#### Additional User Attributes
+
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+
+ ![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-oracle-jde/additional-user-attributes.png)
+
+>[!NOTE]
+>This feature has no correlation to Azure AD but is another source of attributes.
+
+#### Conditional Access Policy
+
+Conditional Access policies are enforced after Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, will list all Conditional Access policies that do not include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+
+2. Select the right arrow and move it to the **Selected Policies** list
+
+ The selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the policy is not enforced.
+
+ ![Screenshot for CA policies](./media/f5-big-ip-easy-button-oracle-jde/conditional-access-policy.png)
+
+> [!NOTE]
+> The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using the test PC's local hosts file is fine for testing.
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites or leave the default whilst testing
+
+ ![Screenshot for Virtual server](./media/f5-big-ip-easy-button-oracle-jde/virtual-server.png)
+
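+For testing without public DNS, one option is a hosts-file entry on the test PC that resolves the application's FQDN to the virtual server address. A sketch, with an illustrative IP and FQDN, run from an elevated PowerShell session:
+
+```powershell
+# Map the published application's FQDN to the BIG-IP virtual server (testing only)
+Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "192.0.2.10  jde.contoso.com"
+```
+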
+### Pool Properties
+
+The **Application Pool** tab details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+
+1. Choose from **Select a Pool**. Create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. For **Pool Servers** select an existing node or specify an IP and port for the servers hosting the Oracle JDE application.
+
+ ![Screenshot for Application pool](./media/f5-big-ip-easy-button-oracle-jde/application-pool.png)
+
+#### Single Sign-On & HTTP Headers
+
+The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO to published applications. As the Oracle JDE application expects headers, enable **HTTP Headers** and enter the following properties.
+
+* **Header Operation:** replace
+* **Header Name:** JDE_SSO_UID
+* **Header Value:** %{session.sso.token.last.username}
+
+ ![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-oracle-jde/sso-and-http-headers.png)
+
+>[!NOTE]
>APM session variables defined within curly brackets are case-sensitive. For example, if you enter OrclGUID when the Azure AD attribute name is defined as orclguid, the attribute mapping will fail
+
+### Session Management
The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5's docs](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way, IdP initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+
+Along with this, the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP initiated sign-outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out of the application.
+
+If the BIG-IP webtop portal is used to access published applications, then a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But consider a scenario where the BIG-IP webtop portal isn't used: the user then has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP initiated sign-out needs careful consideration, to ensure sessions are securely terminated when no longer required. One way of achieving this is to add an SLO function to your application's sign-out button, so that it can redirect your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+
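+As an alternative to the portal, the tenant's SAML sign-out endpoint can also be read from its federation metadata. A sketch, where the tenant ID is a placeholder:
+
+```powershell
+# Fetch the tenant's federation metadata and print the SAML single logout endpoint
+$url = "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml"
+[xml]$metadata = (Invoke-WebRequest -Uri $url).Content
+$metadata.EntityDescriptor.IDPSSODescriptor.SingleLogoutService.Location
+```
+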
+If changing the app isn't an option, consider having the BIG-IP listen for the application's sign-out call, and trigger SLO upon detecting the request. Refer to the [Oracle PeopleSoft SLO guidance](/azure/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+
+## Summary
+
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of Enterprise applications.
+
+## Next steps
+
+From a browser, connect to the **Oracle JDE application's external URL** or select the application's icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](/azure/active-directory/manage-apps/f5-big-ip-header-advanced). Alternatively, the BIG-IP gives the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configuration is automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your application's configuration.
+
+![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-easy-button-oracle-jde/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+Failure to access a SHA protected application can be due to any number of factors. BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list then **OK**
+
+Reproduce your issue, then inspect the logs, but remember to switch this back when finished, as verbose mode generates a lot of data.
+
+If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+
+2. Run the report for the last hour to see if the logs provide any clues. The **View session variables** link for your session will also help you understand whether the APM is receiving the expected claims from Azure AD
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. In this case, head to **Access Policy > Overview > Active Sessions** and select the link for your active session
+
+2. The **View Variables** link in this location may also help you root-cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source
+
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory Paloaltoadmin Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/paloaltoadmin-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
> [!NOTE] > For more information about the attributes, see the following articles:
- > * [Administrative role profile for Admin UI (adminrole)](https://www.paloaltonetworks.com/documentation/80/pan-os/pan-os/firewall-administration/manage-firewall-administrators/configure-an-admin-role-profile)
+ > * [Administrative role profile for Admin UI (adminrole)](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/firewall-administration/manage-firewall-administrators/configure-an-admin-role-profile)
> * [Device access domain for Admin UI (accessdomain)](https://docs.paloaltonetworks.com/pan-os/8-0/pan-os-web-interface-help/device/device-access-domain.html) 1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
You can also:
[cert-manager-issuer]: https://cert-manager.io/docs/concepts/issuer/ [lets-encrypt]: https://letsencrypt.org/ [nginx-ingress]: https://github.com/kubernetes/ingress-nginx
-[helm-install]: https://docs.helm.sh/using-helm/#installing-helm
+[helm-install]: https://helm.sh/docs/helm/helm_install
[ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx <!-- LINKS - internal -->
application-gateway Custom Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/custom-error.md
Previously updated : 11/16/2019 Last updated : 04/04/2022
After you specify an error page, the application gateway downloads it from the s
1. Navigate to Application Gateway in the portal and choose an application gateway.
- ![Screenshot shows the Overview page for an application gateway.](media/custom-error/ag-overview.png)
-2. Click **Listeners** and navigate to a particular listener where you want to specify an error page.
+2. Select **Listeners** and navigate to a particular listener where you want to specify an error page.
- ![Application Gateway listeners](media/custom-error/ag-listener.png)
3. Configure a custom error page for a 403 WAF error or a 502 maintenance page at the listener level. > [!NOTE] > Creating global level custom error pages from the Azure portal is currently not supported.
-4. Specify a publicly accessible blob URL for a given error status code and click **Save**. The Application Gateway is now configured with the custom error page.
+4. Under **Error page url**, select **Yes**, and then configure a publicly accessible blob URL for a given error status code. Select **Save**. The Application Gateway is now configured with the custom error page.
- ![Application Gateway error codes](media/custom-error/ag-error-codes.png)
+ ![Screenshot of Application Gateway custom error page.](media/custom-error/ag-error-codes.png)
## Azure PowerShell configuration
automation Automation Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-overview.md
Check [Azure Automation Network Configuration](automation-network-configuration.
#### Proxy support
-Proxy support for the DSC agent is available in Windows version 1809 and later. This option is enabled by setting the values for `ProxyURL` and `ProxyCredential` properties in the [metaconfiguration script](automation-dsc-onboarding.md#generate-dsc-metaconfigurations)
+Proxy support for the DSC agent is available in Windows release 1809 and later. This option is enabled by setting the values for `ProxyURL` and `ProxyCredential` properties in the [metaconfiguration script](automation-dsc-onboarding.md#generate-dsc-metaconfigurations)
used to register nodes. >[!NOTE]
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md
Previously updated : 11/03/2021 Last updated : 03/30/2022
A computer running Azure CLI that is uploading monitoring metrics or logs to Azu
- `*.oms.opinsights.azure.com` - `*.monitoring.azure.com`
+For example, to upload usage metrics, data services connects to `https://<azureRegion>.monitoring.azure.com/`, where `<azureRegion>` is the region where data services is deployed.
+
+Likewise, data services will connect to the log analytics workspace at `https://<subscription_id>.ods.opinsights.azure.com` where `<subscription_id>` represents your Azure subscription.
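+
+A quick outbound connectivity check against these endpoints can be run with PowerShell. A sketch; the region value is illustrative:
+
+```powershell
+# Verify outbound HTTPS to the metrics ingestion endpoint for your region
+$region = "eastus"
+Test-NetConnection -ComputerName "$region.monitoring.azure.com" -Port 443
+```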
+ #### Protocol HTTPS
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
To ensure that the privileged init container setting is not reverted to the defa
### Enable High Availability features on installation OSM's control plane components are built with High Availability and Fault Tolerance in mind. This section describes how to enable Horizontal Pod Autoscaling (HPA) and Pod Disruption Budget (PDB) during installation. Read more on the design
-considerations of High Availability on OSM [here](https://openservicemesh.io/docs/guides/ha_scale/high_availability/).
+considerations of High Availability on OSM [here](https://docs.openservicemesh.io/docs/guides/ha_scale/high_availability/).
#### Horizontal Pod Autoscaling (HPA) HPA automatically scales up or down control plane pods based on the average target CPU utilization (%) and average target
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB ```
-Now, you can add the storage output binding to your project.
::: zone-end

::: zone pivot="programming-language-javascript"
azure-maps Tutorial Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-wfs.md
To query all collections in your dataset:
4. Select **Send**.
-5. The response body is returned in GeoJSON format and contains all collections in the dataset. For simplicity, the example here only shows the `unit` collection. To see an example that contains all collections, see [WFS Describe Collections API](/rest/api/maps/v2/wfs/collection-description). To learn more about any collection, you can select any of the URLs inside the `links` element.
+5. The response body is returned in GeoJSON format and contains all collections in the dataset. For simplicity, the example here only shows the `unit` collection. To see an example that contains all collections, see [WFS Describe Collections API](/rest/api/maps/v2/wfs/get-collection-definition). To learn more about any collection, you can select any of the URLs inside the `links` element.
```json {
azure-monitor Alerts Metric Create Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-create-templates.md
Previously updated : 2/23/2022 Last updated : 4/4/2022 # Create a metric alert with a Resource Manager template
Save the json below as simplestaticmetricalert.json for the purpose of this walk
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as simplestaticmetricalert.parameters.json and modify it as
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as simpledynamicmetricalert.json for the purpose of this wal
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as simpledynamicmetricalert.parameters.json and modify it as
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as advancedstaticmetricalert.json for the purpose of this wa
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as advancedstaticmetricalert.parameters.json for
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as multidimensionalstaticmetricalert.json for the purpose of
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as multidimensionalstaticmetricalert.parameters.j
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as advanceddynamicmetricalert.json for the purpose of this w
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as advanceddynamicmetricalert.parameters.json for
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as customstaticmetricalert.json for the purpose of this walk
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as customstaticmetricalert.parameters.json for th
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as all-vms-in-resource-group-static.json for the purpose of
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as all-vms-in-resource-group-static.parameters.js
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as all-vms-in-resource-group-dynamic.json for the purpose of
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as all-vms-in-resource-group-dynamic.parameters.j
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as all-vms-in-subscription-static.json for the purpose of th
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as all-vms-in-subscription-static.parameters.json
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as all-vms-in-subscription-dynamic.json for the purpose of t
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as all-vms-in-subscription-dynamic.parameters.jso
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as list-of-vms-static.json for the purpose of this walk-thro
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as list-of-vms-static.parameters.json for the pur
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as list-of-vms-dynamic.json for the purpose of this walk-thr
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save and modify the json below as list-of-vms-dynamic.parameters.json for the purpose of this walk-through.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "alertName": {
Save the json below as availabilityalert.json for the purpose of this walkthrough.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "appName": {
Save the json below as availabilityalert.parameters.json and modify it as required.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "appName": {
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
Last updated 04/22/2020
>The minimum agent version supported for scraping Prometheus metrics is ciprod07092019 or later, and the agent version supported for writing configuration and agent errors in the `KubeMonAgentEvents` table is ciprod10112019. For Azure Red Hat OpenShift and Red Hat OpenShift v4, agent version ciprod04162020 or higher is required. > >For more information about the agent versions and what's included in each release, see [agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
->To verify your agent version, from the **Node** tab select a node, and in the properties pane note value of the **Agent Image Tag** property.
+>To verify your agent version, select the **Insights** tab of the resource, then from the **Nodes** tab select a node, and in the properties pane note the value of the **Agent Image Tag** property.
Scraping of Prometheus metrics is supported with Kubernetes clusters hosted on:
azure-monitor Surface Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/surface-hubs.md
Use the following information to install and configure the solution. In order to
* A [Log Analytics subscription](https://azure.microsoft.com/pricing/details/log-analytics/) level that will support the number of devices you want to monitor. Log Analytics pricing varies depending on how many devices are enrolled, and how much data it processes. You'll want to take this into consideration when planning your Surface Hub rollout.
-Next, you will either add an existing Log Analytics workspace or create a new one. Detailed instructions for using either method is at [Create a Log Analytics workspace in the Azure portal](../logs/quick-create-workspace.md). Once the Log Analytics workspace is configured, there are two ways to enroll your Surface Hub devices:
+The Surface Hub solution is offered as an Azure Marketplace application that is linked to a new or existing Log Analytics workspace within your Azure subscription. Detailed instructions for using either method are in [Create a Log Analytics workspace in the Azure portal](../logs/quick-create-workspace.md).
+
+To configure the Surface Hub solution, follow these steps:
+
+1. Go to the [Surface Hub page in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.SurfaceHubOMS?tab=Overview). You might need to sign in to your Azure subscription to access this page.
+2. Select **Get it now**.
+3. Choose an existing Log Analytics workspace or configure a new one.
+4. After your workspace is configured and selected, select **Create**. You'll receive a notification when the solution has been successfully created.
+
+Once the Log Analytics workspace is configured and the solution created, there are two ways to enroll your Surface Hub devices:
* Automatically through Intune * Manually through **Settings** on your Surface Hub device.
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
The following table summarizes the differences between the plans.
| Category | Analytics Logs | Basic Logs | |:|:|:| | Ingestion | Cost for ingestion. | Reduced cost for ingestion. |
-| Log queries | No additional cost. Full query language. | Additional cost. Subset of query language. |
+| Log queries | No additional cost. Full query capabilities. | Additional cost. [Subset of query capabilities](basic-logs-query.md#limitations). |
| Retention | Configure retention from 30 days to 730 days. | Retention fixed at 8 days. | | Alerts | Supported. | Not supported. |
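To see the difference in practice, here's a minimal sketch of querying a workspace from the command line; the workspace GUID is a placeholder, and `ContainerLogV2` is only an example of a table that can be configured for the Basic Logs plan. Which plan the target table is on determines the query cost and which operators are available.

```bash
# Run a simple KQL query against a Log Analytics workspace.
# The workspace GUID is a placeholder; replace it with your own.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "ContainerLogV2 | take 10"
```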
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
Title: Using customer-managed storage accounts in Azure Monitor Log Analytics description: Use your own storage account for Log Analytics scenarios -- Previously updated : 09/03/2020+++ Last updated : 04/04/2022 # Using customer-managed storage accounts in Azure Monitor Log Analytics
Storage accounts are charged by the volume of stored data, the type of the stora
## Next steps - Learn about [using Azure Private Link to securely connect networks to Azure Monitor](private-link-security.md)-- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
+- Learn about [Azure Monitor customer-managed keys](../logs/customer-managed-keys.md)
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-troubleshoot.md
If you receive a message that the virtual machine needs to be onboarded after yo
### Is the operating system supported? If the operating system is not in the list of [supported operating systems](vminsights-enable-overview.md#supported-operating-systems) then the extension will fail to install and you will see this message that we are waiting for data to arrive.
+> [!IMPORTANT]
+> After April 11, 2022, if you don't see your virtual machine in the VM insights solution, it might be due to running an older version of the Dependency Agent. For more details, see the blog post: https://techcommunity.microsoft.com/t5/azure-monitor-status/potential-breaking-changes-for-vm-insights-linux-customers/ba-p/3271989 . This doesn't apply to Windows machines or to dates before April 11, 2022.
+ ### Did the extension install properly? If you still see a message that the virtual machine needs to be onboarded, it may mean that one or both of the extensions failed to install correctly. Check the **Extensions** page for your virtual machine in the Azure portal to verify that the following extensions are listed.
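Besides the portal's **Extensions** page, you can check the installed extensions from the command line. A minimal sketch, assuming a hypothetical resource group and VM name:

```bash
# List the extensions installed on a VM along with version and state.
# "myResourceGroup" and "myVM" are placeholders.
az vm extension list \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --query "[].{name:name, version:typeHandlerVersion, state:provisioningState}" \
  --output table
```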
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 03/07/2022 Last updated : 04/04/2022 # What's new in Azure Monitor documentation This article lists significant changes to Azure Monitor documentation.
+## March, 2022
+### Agents
+
+**Updated articles**
+
+- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)
+- [Migrate to Azure Monitor agent from Log Analytics agent](agents/azure-monitor-agent-migration.md)
+
+### Alerts
+
+**Updated articles**
+
+- [Create a classic metric alert rule with a Resource Manager template](alerts/alerts-enable-template.md)
+- [Overview of alerts in Microsoft Azure](alerts/alerts-overview.md)
+- [Alert processing rules](alerts/alerts-action-rules.md)
+
+### Application Insights
+
+**New articles**
+
+- [Error retrieving data message on Application Insights portal](app/troubleshoot-portal-connectivity.md)
+- [Troubleshooting Azure Application Insights auto-instrumentation](app/auto-instrumentation-troubleshoot.md)
+
+**Updated articles**
+
+- [Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)
+- [Application Insights for ASP.NET Core applications](app/asp-net-core.md)
+- [Application Insights for web pages](app/javascript.md)
+- [Application Map: Triage Distributed Applications](app/app-map.md)
+- [Configure Application Insights for your ASP.NET website](app/asp-net.md)
+- [Export telemetry from Application Insights](app/export-telemetry.md)
+- [Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)
+- [React plugin for Application Insights JavaScript SDK](app/javascript-react-plugin.md)
+- [Sampling in Application Insights](app/sampling.md)
+- [Telemetry processors (preview) - Azure Monitor Application Insights for Java](app/java-standalone-telemetry-processors.md)
+- [Tips for updating your JVM args - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)
+- [Unified cross-component transaction diagnostics](app/transaction-diagnostics.md)
+- [Visualizations for Application Change Analysis (preview)](app/change-analysis-visualizations.md)
+
+### Containers
+
+**Updated articles**
+
+- [How to create log alerts from Container insights](containers/container-insights-log-alerts.md)
+
+### Essentials
+
+**New articles**
+
+- [Activity logs insights (Preview)](essentials/activity-logs-insights.md)
+
+**Updated articles**
+
+- [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](essentials/diagnostic-settings.md)
+- [Azure Monitoring REST API walkthrough](essentials/rest-api-walkthrough.md)
++
+### Logs
+
+**New articles**
+
+- [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](logs/custom-logs-migrate.md)
+
+**Updated articles**
+
+- [Archive data from Log Analytics workspace to Azure storage using Logic App](logs/logs-export-logic-app.md)
+- [Azure Monitor Logs Dedicated Clusters](logs/logs-dedicated-clusters.md)
+- [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md)
+- [Configure data retention and archive policies in Azure Monitor Logs (Preview)](logs/data-retention-archive.md)
+- [Log Analytics Workspace Insights](logs/log-analytics-workspace-insights-overview.md)
+- [Move a Log Analytics workspace to different subscription or resource group](logs/move-workspace.md)
+- [Query Basic Logs in Azure Monitor (Preview)](logs/basic-logs-query.md)
+- [Restore logs in Azure Monitor (preview)](logs/restore.md)
+- [Search jobs in Azure Monitor (preview)](logs/search-jobs.md)
+
+### Virtual Machines
+
+**Updated articles**
+
+- [Monitor virtual machines with Azure Monitor: Alerts](vm/monitor-virtual-machine-alerts.md)
++ ## February, 2022 ### General
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 04/02/2022 Last updated : 04/04/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |
-> | managedInstances | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br> Can't have any special characters, such as `@`.<br><br> Can't start or end with hyphen.<br><br> Can't have hyphen twice in both third and fourth place. For example, `ab--cde` is not allowed. |
-> | servers | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br> Can't have any special characters, such as `@`.<br><br> Can't start or end with hyphen.<br><br> Can't have hyphen twice in both third and fourth place. For example, `ab--cde` is not allowed. |
+> | managedInstances | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br> Can't start or end with hyphen. |
+> | servers | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br>Can't start or end with hyphen. |
> | servers / administrators | server | | Must be `ActiveDirectory`. | > | servers / databases | server | 1-128 | Can't use:<br>`<>*%&:\/?` or control characters<br><br>Can't end with period or space. | > | servers / databases / syncGroups | database | 1-150 | Alphanumerics, hyphens, and underscores. | > | servers / elasticPools | server | 1-128 | Can't use:<br>`<>*%&:\/?` or control characters<br><br>Can't end with period or space. |
-> | servers | failoverGroups | 1-63 | Lowercase letters, numbers, and hyphens.<br><br> Can't have any special characters, such as `@`.<br><br> Can't start or end with hyphen.<br><br> Can't have hyphen twice in both third and fourth place. For example, `ab--cde` is not allowed. |
+> | servers / failoverGroups | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br>Can't start or end with hyphen. |
> | servers / firewallRules | server | 1-128 | Can't use:<br>`<>*%&:;\/?` or control characters<br><br>Can't end with period. | > | servers / keys | server | | Must be in format:<br>`VaultName_KeyName_KeyVersion`. |
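The relaxed rule above (lowercase letters, numbers, and hyphens; no leading or trailing hyphen; 1-63 characters) is easy to check locally before a deployment. An illustrative sketch, not an official validator:

```bash
# Pre-flight check for the managedInstances/servers naming rule:
# 1-63 characters, lowercase letters/numbers/hyphens, no leading or trailing hyphen.
is_valid_sql_server_name() {
  [[ "$1" =~ ^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$ ]]
}

is_valid_sql_server_name "contoso-sql-01" && echo "valid"
is_valid_sql_server_name "-contoso" || echo "invalid: leading hyphen"
```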
azure-signalr Signalr Quickstart Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-csharp.md
Title: "Azure SignalR Service serverless quickstart - C#"
-description: "A quickstart for using Azure SignalR Service and Azure Functions to create an App showing GitHub star count using C#."
+description: "A quickstart for using Azure SignalR Service and Azure Functions to create an app showing GitHub star count using C#."
ms.devlang: csharp Previously updated : 06/09/2021 Last updated : 03/30/2022
-# Quickstart: Create an App showing GitHub star count with Azure Functions and SignalR Service via C#
+# Quickstart: Create an app showing GitHub star count with Azure Functions and SignalR Service via C#
-Azure SignalR Service lets you easily add real-time functionality to your application. Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Azure Functions to build a serverless application with C# to broadcast messages to clients.
+In this article, you'll learn how to use SignalR Service and Azure Functions to build a serverless application with C# to broadcast messages to clients.
> [!NOTE]
-> You can get all codes mentioned in the article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/csharp)
+> You can get the code mentioned in this article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/csharp).
## Prerequisites
-If you don't already have Visual Studio Code installed, you can download and use it for free(https://code.visualstudio.com/Download).
+The following prerequisites are needed for this quickstart:
-You may also run this tutorial on the command line (macOS, Windows, or Linux) using the [Azure Functions Core Tools)](../azure-functions/functions-run-local.md?tabs=windows%2Ccsharp%2Cbash#v2). Also the [.NET Core SDK](https://dotnet.microsoft.com/download), and your favorite code editor.
+- Visual Studio Code, or other code editor. If you don't already have Visual Studio Code installed, [download Visual Studio Code here](https://code.visualstudio.com/Download).
+- An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/dotnet) before you begin.
+- [Azure Functions Core Tools](../azure-functions/functions-run-local.md?tabs=windows%2Ccsharp%2Cbash#v2)
+- [.NET Core SDK](https://dotnet.microsoft.com/download)
-If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/dotnet) before you begin.
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp).
-
-## Log in to Azure and create SignalR Service instance
-
-Sign in to the Azure portal at <https://portal.azure.com/> with your Azure account.
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp).
+## Create an Azure SignalR Service instance
[!INCLUDE [Create instance](includes/signalr-quickstart-create-instance.md)]
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp).
- ## Set up and run the Azure Function locally
-1. Make sure you have Azure Function Core Tools installed. And create an empty directory and navigate to the directory with command line.
+You'll need the Azure Functions Core Tools for this step.
+
+1. Create an empty directory and change to the directory with the command line.
+1. Initialize a new project.
```bash # Initialize a function project
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService ```
-2. After you initialize a project. Create a new file with name *Function.cs*. Add the following code to *Function.cs*.
+1. Using your code editor, create a new file with the name *Function.cs*. Add the following code to *Function.cs*:
```csharp using System;
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
} } ```
- These codes have three functions. The `Index` is used to get a website as client. The `Negotiate` is used for client to get access token. The `Broadcast` is periodically
- get star count from GitHub and broadcast messages to all clients.
-3. The client interface of this sample is a web page. Considered we read HTML content from `content/index.html` in `GetHomePage` function, create a new file `index.html` in `content` directory under project root folder. And copy the following content.
+ The code in *Function.cs* has three functions:
+ - `GetHomePage` is used to serve the web page to clients.
+ - `Negotiate` is used by the client to get an access token.
+ - `Broadcast` is periodically called to get the star count from GitHub and then broadcast messages to all clients.
+
+1. The client interface for this sample is a web page. We render the web page using the `GetHomePage` function by reading HTML content from the file *content/index.html*. Now let's create this *index.html* under the `content` subdirectory with the following content:
+ ```html <html>
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
</html> ```
-4. Update your `*.csproj` to make the content page in build output folder.
+1. Update your `*.csproj` so that the content page is copied to the build output folder.
```html <ItemGroup>
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
</ItemGroup> ```
-5. It's almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings.
+1. It's almost done now. The last step is to set the SignalR Service connection string in the Azure Function settings.
- 1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
+ 1. Confirm the SignalR Service instance was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
- 2. Select **Keys** to view the connection strings for the SignalR Service instance.
+ 1. Select **Keys** to view the connection strings for the SignalR Service instance.
![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
- 3. Copy the primary connection string. And execute the command below.
+ 1. Copy the primary connection string, and then run the following command:
```bash func settings add AzureSignalRConnectionString "<signalr-connection-string>" ```
-6. Run the Azure Function in local:
+1. Run the Azure function locally:
```bash func start ```
- After Azure Function running locally. Use your browser to visit `http://localhost:7071/api/index` and you can see the current star count. And if you star or unstar in the GitHub, you will get a star count refreshing every few seconds.
+ After the Azure function is running locally, open `http://localhost:7071/api/index` to see the current star count. If you star or unstar the repository on GitHub, you'll see the star count refresh every few seconds.
> [!NOTE]
- > SignalR binding needs Azure Storage, but you can use local storage emulator when the Function is running locally.
- > If you got some error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.` You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md)
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp)
+ > SignalR binding needs Azure Storage, but you can use a local storage emulator when the function is running locally.
+ > If you get the error `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you need to download and enable the [Storage Emulator](../storage/common/storage-use-emulator.md).
[!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)]
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Next steps
-In this quickstart, you built and ran a real-time serverless application in local. Learn more how to use SignalR Service bindings for Azure Functions.
-Next, learn more about how to bi-directional communicating between clients and Azure Function with SignalR Service.
+In this quickstart, you built and ran a real-time serverless application locally. Next, learn more about bi-directional communication between clients and Azure Functions with Azure SignalR Service.
> [!div class="nextstepaction"] > [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md)
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/maintenance-window.md
Once the maintenance window selection is made and service configuration complete
## Advance notifications
-Maintenance notifications can be configured to alert you of upcoming planned maintenance events for your Azure SQL Database. The alerts arrive 24 hours in advance, at the time of maintenance, and when the maintenance is complete. For more information, see [Advance Notifications](advance-notifications.md).
+Maintenance notifications can be configured to alert you of upcoming planned maintenance events for your Azure SQL Database and Azure SQL Managed Instance. The alerts arrive 24 hours in advance, at the time of maintenance, and when the maintenance is complete. For more information, see [Advance Notifications](advance-notifications.md).
## Feature availability
For the full reference of the sample queries and how to use them across tools li
* [Maintenance window FAQ](maintenance-window-faq.yml) * [Azure SQL Database](sql-database-paas-overview.md)
-* [SQL managed instance](../managed-instance/sql-managed-instance-paas-overview.md)
+* [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md)
* [Plan for Azure maintenance events in Azure SQL Database and Azure SQL Managed Instance](planned-maintenance.md)
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/log-replay-service-migrate.md
The SAS authentication is generated with the time validity that you specified. Y
:::image type="content" source="./media/log-replay-service-migrate/lrs-generated-uri-token.png" alt-text="Screenshot that shows an example of the U R I version of an S A S token."::: > [!NOTE]
- > Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. Follow the instructions in this article to manually specify **Read** and **List** permissions for the SAS token.
+ > Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy) is not supported at this time. Follow the instructions in this article to manually specify **Read** and **List** permissions for the SAS token.
### Copy parameters from the SAS token
Consider the following limitations of LRS:
- System-managed software patches are blocked for 36 hours once the LRS has been started. After this time window expires, the next software maintenance update stops LRS. You will need to restart the LRS migration from the beginning. - LRS requires databases on SQL Server to be backed up with the `CHECKSUM` option enabled. - The SAS token that LRS uses must be generated for the entire Azure Blob Storage container, and it must have **Read** and **List** permissions only. For example, if you grant **Read**, **List** and **Write** permissions, LRS will not be able to start because of the extra **Write** permission.-- Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy.md) is not supported at this time. Follow the instructions in this article to manually specify **Read** and **List** permissions for the SAS token.
+- Using SAS tokens created with permissions set through defining a [stored access policy](/rest/api/storageservices/define-stored-access-policy) is not supported at this time. Follow the instructions in this article to manually specify **Read** and **List** permissions for the SAS token.
- Backup files containing % and $ characters in the file name cannot be consumed by LRS. Consider renaming such file names. - Backup files for different databases must be placed in separate folders on Blob Storage in a flat-file structure. Nested folders inside individual database folders are not supported. - LRS must be started separately for each database pointing to the full URI path containing an individual database folder.
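To satisfy the **Read**- and **List**-only requirement above, generate the SAS token at the container scope with just those two permissions. A minimal sketch with the Azure CLI; the account name, container name, and expiry are placeholders, and the sketch assumes the account key is available through `--account-key` or the `AZURE_STORAGE_KEY` environment variable:

```bash
# Generate a container-scoped SAS token with Read and List permissions only.
# LRS rejects tokens that also carry Write or other permissions.
az storage container generate-sas \
  --account-name mystorageaccount \
  --name migration-backups \
  --permissions rl \
  --expiry 2022-05-01T00:00Z \
  --output tsv
```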
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) release notes | M
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Analyzer for Media (formerly Video Indexer). Previously updated : 03/01/2022 Last updated : 04/04/2022
To stay up-to-date with the most recent Azure Video Analyzer for Media (former V
Video Analyzer for Media enables you to include speakers' characteristics based on a closed captioning file that you choose to download. To include the speakers' attributes, select Downloads -> Closed Captions -> choose the closed captioning downloadable file format (SRT, VTT, TTML, TXT, or CSV) and check the **Include speakers** checkbox.
+### Improvements to the widget offering
+
+The following improvements were made:
+
+* Video Analyzer for Media widgets support more than one locale in a widget's parameter.
+* The Insights widgets support initial search parameters and multiple sorting options.
+* The Insights widgets also include a confirmation step before deleting a face to avoid mistakes.
+* The widget customization now supports width as a string (for example, 100%, 100vw).
+ ## February 2022 ### Public preview of Video Analyzer for Media account management based on ARM in Government cloud
azure-video-analyzer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/upload-index-videos.md
public async Task Sample()
client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key"); // Upload a video
- var content = new MultipartFormDataContent();
+ MultipartFormDataContent content = null;
Console.WriteLine("Uploading...");
+
// Get the video from URL var videoUrl = "VIDEO_URL"; // Replace with the video URL // As an alternative to specifying video URL, you can upload a file. // Remove the videoUrl parameter from the query parameters below and add the following lines:
- //FileStream video =File.OpenRead(Globals.VIDEOFILE_PATH);
- //byte[] buffer =new byte[video.Length];
+ //content = new MultipartFormDataContent();
+ //FileStream video = File.OpenRead(@"c:\videos\democratic3.mp4");
+ //byte[] buffer = new byte[video.Length];
//video.Read(buffer, 0, buffer.Length);
- //content.Add(new ByteArrayContent(buffer));
+ //content.Add(new ByteArrayContent(buffer), "MyVideo", "MyVideo");
queryParams = CreateQueryString( new Dictionary<string, string>()
namespace VideoIndexerArm
var client = new HttpClient(handler); // Upload a video
- var content = new MultipartFormDataContent();
+ MultipartFormDataContent content = null;
Console.WriteLine("Uploading...");
- // Get the video from URL
// As an alternative to specifying video URL, you can upload a file. // Remove the videoUrl parameter from the query parameters below and add the following lines:
- // FileStream video =File.OpenRead(Globals.VIDEOFILE_PATH);
- // byte[] buffer =new byte[video.Length];
- // video.Read(buffer, 0, buffer.Length);
- // content.Add(new ByteArrayContent(buffer));
+ //content = new MultipartFormDataContent();
+ //FileStream video = File.OpenRead(@"c:\videos\democratic3.mp4");
+ //byte[] buffer = new byte[video.Length];
+ //video.Read(buffer, 0, buffer.Length);
+ //content.Add(new ByteArrayContent(buffer), "MyVideo", "MyVideo");
var queryParams = CreateQueryString( new Dictionary<string, string>()
azure-vmware Concepts Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-api-management.md
The external deployment diagram shows the entire process and the actors involved
The traffic flow goes through the API Management instance, which abstracts the backend services, plugged into the Hub virtual network. The ExpressRoute Gateway routes the traffic to the ExpressRoute Global Reach channel and reaches an NSX Load Balancer distributing the incoming traffic to the different backend service instances.
-API Management has an Azure Public API, and activating Azure DDOS Protection Service is recommended.
+API Management has an Azure Public API, and activating Azure DDoS Protection Service is recommended.
:::image type="content" source="media/api-management/api-management-external-deployment.png" alt-text="Diagram showing an external API Management deployment for Azure VMware Solution" border="false":::
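If you follow the recommendation to activate DDoS protection, one possible starting point is a DDoS protection plan associated with the hub virtual network. A sketch under assumed names; the resource group, plan, and virtual network names are placeholders:

```bash
# Create a DDoS protection plan and attach it to the hub VNet.
az network ddos-protection create \
  --resource-group myResourceGroup \
  --name myDdosPlan \
  --vnets myHubVnet
```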
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
Last updated 07/29/2021
# Azure VMware Solution identity concepts
-Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You'll use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. The CloudAdmin role is used for vCenter and restricted administrator rights for NSX-T Manager.
+Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You'll use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. The CloudAdmin role is used for vCenter Server and restricted administrator rights for NSX-T Manager.
-## vCenter access and identity
+## vCenter Server access and identity
[!INCLUDE [vcenter-access-identity-description](includes/vcenter-access-identity-description.md)] > [!IMPORTANT]
-> Azure VMware Solution offers custom roles on vCenter but currently doesn't offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter](#create-custom-roles-on-vcenter) section later in this article.
+> Azure VMware Solution offers custom roles on vCenter Server but currently doesn't offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter Server](#create-custom-roles-on-vcenter-server) section later in this article.
### View the vCenter privileges
You can view the privileges granted to the Azure VMware Solution CloudAdmin role
:::image type="content" source="media/concepts/role-based-access-control-cloudadmin-privileges.png" alt-text="Screenshot showing the roles and privileges for CloudAdmin in the vSphere Client.":::
-The CloudAdmin role in Azure VMware Solution has the following privileges on vCenter. For more information, see the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).
+The CloudAdmin role in Azure VMware Solution has the following privileges on vCenter Server. For more information, see the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).
| Privilege | Description | | | -- |
The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
| **vService** | Create dependency<br />Destroy dependency<br />Reconfigure dependency configuration<br />Update dependency | | **vSphere tagging** | Assign and unassign vSphere tag<br />Create vSphere tag<br />Create vSphere tag category<br />Delete vSphere tag<br />Delete vSphere tag category<br />Edit vSphere tag<br />Edit vSphere tag category<br />Modify UsedBy field for category<br />Modify UsedBy field for tag |
-### Create custom roles on vCenter
+### Create custom roles on vCenter Server
Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role.
You'll use the CloudAdmin role to create, modify, or delete custom roles with pr
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmin role as the basis for creating new custom roles. #### Create a custom role
-1. Sign in to vCenter with cloudadmin\@vsphere.local or a user with the CloudAdmin role.
+1. Sign in to vCenter Server with cloudadmin\@vsphere.local or a user with the CloudAdmin role.
1. Navigate to the **Roles** configuration section and select **Menu** > **Administration** > **Access Control** > **Roles**.
azure-vmware Concepts Network Design Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md
If you're using BGP AS-Path Prepend to dedicate a circuit from Azure towards o
## Management VMs and default routes from on-premises > [!IMPORTANT]
-> Azure Vmware Solution Management VMs don't honor a default route from On-Premises.
+> Azure VMware Solution Management VMs don't honor a default route from On-Premises.
-If you're routing back to your on-premises networks using only a default route advertised towards Azure, the vCenter and NSX manager VMs won't honor that route.
+If you're routing back to your on-premises networks using only a default route advertised towards Azure, the vCenter Server and NSX Manager VMs won't honor that route.
**Solution**
-To reach vCenter and NSX manager, more specific routes from on-prem need to be provided to allow traffic to have a return path route to those networks.
+To reach vCenter Server and NSX Manager, more specific routes from on-premises need to be provided to allow traffic to have a return path to those networks.
## Next steps
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md
This article covers the key concepts that establish networking and interconnecti
## Azure VMware Solution private cloud use cases The use cases for Azure VMware Solution private clouds include:-- New VMware VM workloads in the cloud
+- New VMware vSphere VM workloads in the cloud
- VM workload bursting to the cloud (on-premises to Azure VMware Solution only) - VM workload migration to the cloud (on-premises to Azure VMware Solution only) - Disaster recovery (Azure VMware Solution to Azure VMware Solution or on-premises to Azure VMware Solution)
You can interconnect your Azure virtual network with the Azure VMware Solution p
The diagram below shows the basic network interconnectivity established at the time of a private cloud deployment. It shows the logical networking between a virtual network in Azure and a private cloud. This connectivity is established via a backend ExpressRoute that is part of the Azure VMware Solution service. The interconnectivity fulfills the following primary use cases: -- Inbound access to vCenter server and NSX-T manager that is accessible from VMs in your Azure subscription.
+- Inbound access to vCenter Server and NSX-T Manager that is accessible from VMs in your Azure subscription.
- Outbound access from VMs on the private cloud to Azure services. - Inbound access of workloads running in the private cloud.
In the fully interconnected scenario, you can access the Azure VMware Solution f
The diagram below shows the on-premises to private cloud interconnectivity, which enables the following use cases: -- Hot/Cold vCenter vMotion between on-premises and Azure VMware Solution.
+- Hot/Cold vSphere vMotion between on-premises and Azure VMware Solution.
- On-Premises to Azure VMware Solution private cloud management access. :::image type="content" source="media/concepts/adjacency-overview-drawing-double.png" alt-text="Diagram showing the virtual network and on-premises to private cloud interconnectivity." border="false":::
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md
Last updated 09/17/2021
# Run command in Azure VMware Solution
-In Azure VMware Solution, vCenter has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter [privileges](concepts-identity.md#view-the-vcenter-privileges) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
+In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#view-the-vcenter-privileges) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
Azure VMware Solution supports the following operations:
Now that you've learned about the Run command concepts, you can use the Run comm
- [Configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned a vSAN storage policy. You can assign a vSAN storage policy in an initial deployment of a VM or when you do other VM operations, such as cloning or migrating. -- [Configure external identity source for vCenter (Run command)](configure-identity-source-vcenter.md) - Configure Active Directory over LDAP or LDAPS for vCenter, which enables the use of an external identity source as an Active Directory. Then, you can add groups from the external identity source to the CloudAdmin role.
+- [Configure external identity source for vCenter (Run command)](configure-identity-source-vcenter.md) - Configure Active Directory over LDAP or LDAPS for vCenter Server, which enables the use of an external identity source as an Active Directory. Then, you can add groups from the external identity source to the CloudAdmin role.
-- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - Store data directly to a recovery cluster in vSAN. The data gets captured through I/O filters that run within vSphere. The underlying data store can be VMFS, VSAN, vVol, or any HCI platform.
+- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - Store data directly to a recovery cluster in vSAN. The data gets captured through I/O filters that run within vSphere. The underlying data store can be VMFS, VSAN, vVol, or any HCI platform.
azure-vmware Concepts Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-security-recommendations.md
The following are network-related security recommendations for Azure VMware Solu
| **Recommendation** | **Comments** | | :-- | :-- |
-| Only allow trusted networks | Only allow access to your environments over ExpressRoute or other secured networks. Avoid exposing your management services like vCenter, for example, on the internet. |
+| Only allow trusted networks | Only allow access to your environments over ExpressRoute or other secured networks. Avoid exposing your management services like vCenter Server, for example, on the internet. |
| Use Azure Firewall Premium | If you must expose management services on the internet, use [Azure Firewall Premium](../firewall/premium-migrate.md) with both IDPS Alert and Deny mode along with TLS inspection for proactive threat detection. | | Deploy and configure Network Security Groups on VNET | Ensure any VNET deployed has [Network Security Groups](../virtual-network/network-security-groups-overview.md) configured to control ingress and egress to your environment. | | Review and implement recommendations within the Azure security baseline for Azure VMware Solution | [Azure security baseline for Azure VMware Solution](/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
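For the NSG recommendation above, here's a minimal sketch of creating a network security group and one inbound rule with the Azure CLI; all names and the trusted address prefix are placeholders:

```bash
# Create an NSG and allow HTTPS inbound only from a trusted prefix.
az network nsg create \
  --resource-group myResourceGroup \
  --name myNsg

az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowHttpsFromTrusted \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 443
```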
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
In your data center, you can connect or pair the VMware HCX Cloud Manager in Azu
> [!IMPORTANT] > Although the VMware Configuration Maximum tool describes site pairs maximum to be 25 between the on-premises HCX Connector and HCX Cloud Manager, licensing limits this to three for HCX Advanced and 10 for HCX Enterprise Edition.
-1. Sign in to your on-premises vCenter, and under **Home**, select **HCX**.
+1. Sign in to your on-premises vCenter Server, and under **Home**, select **HCX**.
1. Under **Infrastructure**, select **Site Pairing** and select the **Connect To Remote Site** option (in the middle of the screen).
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
Use the following steps to perform a manual upgrade for Arc appliance virtual ma
1. Power off the VM. 1. Delete the VM. 1. Delete the download template corresponding to the VM.
-1. Delete the appliance ARM resource.
+1. Delete the resource bridge ARM resource.
1. Get the previous script `Config_avs` file and add the following configuration item: 1. `"register":false` 1. Download the latest version of the Azure VMware Solution onboarding script.
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-azure-vmware-solution.md
The diagram shows the deployment workflow of Azure VMware Solution.
:::image type="content" source="media/deploy-azure-vmware-solution-workflow.png" alt-text="Diagram showing the Azure VMware Solution deployment workflow." lightbox="media/deploy-azure-vmware-solution-workflow.png" border="false":::
-In this how-to, you'll':
+In this how-to, you'll:
> [!div class="checklist"] > * Register the resource provider and create a private cloud
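The first checklist item, registering the resource provider, can be done ahead of the portal steps with the Azure CLI. A minimal sketch:

```bash
# Register the Azure VMware Solution resource provider and confirm its state.
az provider register --namespace Microsoft.AVS
az provider show --namespace Microsoft.AVS --query registrationState
```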
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md
Last updated 03/29/2022
# Install and activate VMware HCX in Azure VMware Solution
-VMware HCX Advanced and its associated Cloud Manager are no longer pre-deployed in Azure VMware Solution. Instead, you'll install it through the Azure portal as an add-on. You'll still download the HCX Connector OVA and deploy the virtual appliance on your on-premises vCenter.
+VMware HCX Advanced and its associated Cloud Manager are no longer pre-deployed in Azure VMware Solution. Instead, you'll install it through the Azure portal as an add-on. You'll still download the HCX Connector OVA and deploy the virtual appliance on your on-premises vCenter Server.
Any edition of VMware HCX supports 25 site pairings (on-premises to cloud or cloud to cloud). The default is HCX Advanced, but you can open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled. Once the service is generally available, you'll have 30 days to decide on your next steps. You can turn off or opt out of the HCX Enterprise Edition service but keep HCX Advanced as it's part of the node cost.
-Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and not using the Enterprise features. If you plan to downgrade, ensure that no scheduled migrations, features like RAV and [HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) aren't in use, and site pairings are three or fewer.
+Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and not using the Enterprise features. If you plan to downgrade, ensure that no scheduled migrations, features like RAV and [HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) are in use, and site pairings are three or fewer.
>[!TIP] >You can also [uninstall HCX Advanced](#uninstall-hcx-advanced) through the portal. When you uninstall HCX Advanced, make sure you don't have any active migrations in progress. Removing HCX Advanced returns the resources to your private cloud occupied by the HCX virtual appliances.
After you're finished, follow the recommended next steps at the end to continue
## Download and deploy the VMware HCX Connector OVA
-In this step, you'll download the VMware HCX Connector OVA file, and then you'll deploy the VMware HCX Connector to your on-premises vCenter.
+In this step, you'll download the VMware HCX Connector OVA file, and then you'll deploy the VMware HCX Connector to your on-premises vCenter Server.
1. Open a browser window, sign in to the Azure VMware Solution HCX Manager on `https://x.x.x.9` port 443 with the **cloudadmin\@vsphere.local** user credentials 1. Under **Administration** > **System Updates**, select **Request Download Link**. If the box is greyed, wait a few seconds for it to generate a link.
-1. Either download or receive a link for the VMware HCX Connector OVA file you deploy on your local vCenter.
+1. Either download or receive a link for the VMware HCX Connector OVA file you deploy on your local vCenter Server.
-1. In your on-premises vCenter, select an [OVF template](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vm_admin.doc/GUID-17BEDA21-43F6-41F4-8FB2-E01D275FE9B4.html) to deploy the VMware HCX Connector to your on-premises vCenter.
+1. In your on-premises vCenter Server, select an [OVF template](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vm_admin.doc/GUID-17BEDA21-43F6-41F4-8FB2-E01D275FE9B4.html) to deploy the VMware HCX Connector to your on-premises vSphere cluster.
1. Navigate to and select the OVA file that you downloaded and then select **Open**.
After deploying the VMware HCX Connector OVA on-premises and starting the applia
1. In **Connect your vCenter**, provide the FQDN or IP address of your vCenter server and the appropriate credentials, and then select **Continue**. >[!TIP]
- >The vCenter server is where you deployed the VMware HCX Connector in your datacenter.
+ >The vCenter Server is where you deployed the VMware HCX Connector in your datacenter.
1. In **Configure SSO/PSC**, provide your Platform Services Controller's FQDN or IP address, and select **Continue**. >[!NOTE]
- >Typically, it's the same as your vCenter FQDN or IP address.
+ >Typically, it's the same as your vCenter Server FQDN or IP address.
1. Verify that the information entered is correct and select **Restart**. >[!NOTE] >You'll experience a delay after restarting before being prompted for the next step.
-After the services restart, you'll see vCenter showing as green on the screen that appears. Both vCenter and SSO must have the appropriate configuration parameters, which should be the same as the previous screen.
+After the services restart, you'll see vCenter Server showing as green on the screen that appears. Both vCenter Server and SSO must have the appropriate configuration parameters, which should be the same as the previous screen.
:::image type="content" source="media/tutorial-vmware-hcx/activation-done.png" alt-text="Screenshot of the dashboard with green vCenter status." lightbox="media/tutorial-vmware-hcx/activation-done.png":::
You can uninstall HCX Advanced through the portal, which removes the existing pa
1. Enter **yes** to confirm the uninstall.
-At this point, HCX Advanced no longer has the vCenter plugin, and if needed, you can reinstall it at any time.
+At this point, HCX Advanced no longer has the vCenter Server plugin, and if needed, you can reinstall it at any time.
## Next steps
azure-vmware Plan Private Cloud Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/plan-private-cloud-deployment.md
Last updated 09/27/2021
Planning your Azure VMware Solution deployment is critical for a successful production-ready environment for creating virtual machines (VMs) and migration. During the planning process, you'll identify and gather what's needed for your deployment. As you plan, make sure to document the information you gather for easy reference during the deployment. A successful deployment results in a production-ready environment for creating virtual machines (VMs) and migration.
-In this how-to, you'll':
+In this how-to, you'll:
> [!div class="checklist"] > * Identify the Azure subscription, resource group, region, and resource name
After the support team receives your request for a host quota, it takes up to fi
## Define the IP address segment for private cloud management
-Azure VMware Solution requires a /22 CIDR network, for example, `10.0.0.0/22`. This address space is carved into smaller network segments (subnets) and used for Azure VMware Solution management segments, including vCenter, VMware HCX, NSX-T, and vMotion functionality. The diagram highlights Azure VMware Solution management IP address segments.
+Azure VMware Solution requires a /22 CIDR network, for example, `10.0.0.0/22`. This address space is carved into smaller network segments (subnets) and used for Azure VMware Solution management segments, including vCenter Server, VMware HCX, NSX-T Data Center, and vMotion functionality. The diagram highlights Azure VMware Solution management IP address segments.
:::image type="content" source="media/pre-deployment/management-vmotion-vsan-network-ip-diagram.png" alt-text="Diagram showing Azure VMware Solution management IP address segments." border="false":::
Azure VMware Solution requires a /22 CIDR network, for example, `10.0.0.0/22`. T
## Define the IP address segment for VM workloads
-Like with any VMware environment, the VMs must connect to a network segment. As the production deployment of Azure VMware Solution expands, there is often a combination of L2 extended segments from on-premises and local NSX-T network segments.
+Like with any VMware vSphere environment, the VMs must connect to a network segment. As the production deployment of Azure VMware Solution expands, there is often a combination of L2 extended segments from on-premises and local NSX-T network segments.
For the initial deployment, identify a single network segment (IP network), for example, `10.0.4.0/24`. This network segment is used primarily for testing purposes during the initial deployment. The address block shouldn't overlap with any network segments on-premises or within Azure and shouldn't be within the /22 network segment already defined.
Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circ
## Define VMware HCX network segments
-VMware HCX is an application mobility platform that simplifies application migration, workload rebalancing, and business continuity across data centers and clouds. You can migrate your VMware workloads to Azure VMware Solution and other connected sites through various migration types.
+VMware HCX is an application mobility platform that simplifies application migration, workload rebalancing, and business continuity across data centers and clouds. You can migrate your VMware vSphere workloads to Azure VMware Solution and other connected sites through various migration types.
VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. When you create your network profiles, you use the IP segments. Identify the following for the VMware HCX deployment, which supports a pilot or small product use case. Depending on the needs of your migration, modify as necessary. -- **Management network:** When deploying VMware HCX on-premises, you'll need to identify a management network for VMware HCX. Typically, it's the same management network used by your on-premises VMware cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
+- **Management network:** When deploying VMware HCX on-premises, you'll need to identify a management network for VMware HCX. Typically, it's the same management network used by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
>[!NOTE]
- >Preparing for large environments, instead of using the management network used for the on-premises VMware cluster, create a new /26 network and present that network as a port group to your on-premises VMware cluster. You can then create up to 10 service meshes and 60 network extenders (-1 per service mesh). You can stretch **eight** networks per network extender by using Azure VMware Solution private clouds.
+ >When preparing for large environments, instead of using the management network used for the on-premises VMware vSphere cluster, create a new /26 network and present that network as a port group to your on-premises VMware vSphere cluster. You can then create up to 10 service meshes and 60 network extenders (-1 per service mesh). You can stretch **eight** networks per network extender by using Azure VMware Solution private clouds.
- **Uplink network:** When deploying VMware HCX on-premises, you'll need to identify an Uplink network for VMware HCX. Use the same network which you'll use for the Management network. -- **vMotion network:** When deploying VMware HCX on-premises, you'll need to identify a vMotion network for VMware HCX. Typically, it's the same network used for vMotion by your on-premises VMware cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
+- **vMotion network:** When deploying VMware HCX on-premises, you'll need to identify a vMotion network for VMware HCX. Typically, it's the same network used for vMotion by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
You must expose the vMotion network on a distributed virtual switch or vSwitch0. If it isn't, modify the environment to accommodate it. >[!NOTE]
- >Many VMware environments use non-routed network segments for vMotion, which poses no problems.
+ >Many VMware vSphere environments use non-routed network segments for vMotion, which poses no problems.
- **Replication network:** When deploying VMware HCX on-premises, you'll need to define a replication network. Use the same network as you are using for your Management and Uplink networks. If the on-premises cluster hosts use a dedicated Replication VMkernel network, reserve **two** IP addresses in this network segment and use the Replication VMkernel network for the replication network.
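To sanity-check an address plan before deploying the HCX appliances, here's a minimal sketch using Python's standard `ipaddress` module. The CIDRs and segment names are hypothetical placeholders; the two-IP minimums and the shared Management/Uplink/Replication network follow the guidance above.

```python
import ipaddress

# Hypothetical address plan; substitute your own segments.
mgmt_net = ipaddress.ip_network("10.10.20.0/26")     # new /26 per the note above
vmotion_net = ipaddress.ip_network("10.10.21.0/24")  # existing vMotion segment

mgmt_hosts = list(mgmt_net.hosts())        # 62 usable addresses in a /26
vmotion_hosts = list(vmotion_net.hosts())

# Documented minimums for a pilot or small deployment: two IPs per segment.
# Uplink and Replication reuse the Management network in this plan.
reservations = {
    "hcx-management":  mgmt_hosts[0:2],
    "hcx-uplink":      mgmt_hosts[0:2],   # same network as Management, per the guidance
    "hcx-replication": mgmt_hosts[2:4],
    "hcx-vmotion":     vmotion_hosts[0:2],
}

for role, ips in reservations.items():
    print(f"{role}: {[str(ip) for ip in ips]}")
```

Scale the reservations up if your deployment grows beyond the pilot or small use case described above.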
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-access-private-cloud.md
Last updated 08/13/2021
# Tutorial: Access an Azure VMware Solution private cloud
-Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter. Instead, you'll need to connect to the Azure VMware Solution vCenter instance through a jump box.
+Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter Server. Instead, you'll need to connect to the Azure VMware Solution vCenter Server instance through a jump box.
-In this tutorial, you'll create a jump box in the resource group you created in the [previous tutorial](tutorial-configure-networking.md) and sign into the Azure VMware Solution vCenter. This jump box is a Windows virtual machine (VM) on the same virtual network you created. It provides access to both vCenter and the NSX Manager.
+In this tutorial, you'll create a jump box in the resource group you created in the [previous tutorial](tutorial-configure-networking.md) and sign into the Azure VMware Solution vCenter Server. This jump box is a Windows virtual machine (VM) on the same virtual network you created. It provides access to both vCenter Server and the NSX Manager.
In this tutorial, you learn how to: > [!div class="checklist"] > * Create a Windows VM to access the Azure VMware Solution vCenter
-> * Sign into vCenter from this VM
+> * Sign into vCenter Server from this VM
## Create a new Windows virtual machine
In this tutorial, you learn how to:
## Connect to the local vCenter of your private cloud
-1. From the jump box, sign in to vSphere Client with VMware vCenter SSO using a cloud admin username and verify that the user interface displays successfully.
+1. From the jump box, sign in to vSphere Client with VMware vCenter Server SSO using a cloud admin username and verify that the user interface displays successfully.
1. In the Azure portal, select your private cloud, and then **Manage** > **Identity**.
- The URLs and user credentials for private cloud vCenter and NSX-T Manager display.
+ The URLs and user credentials for private cloud vCenter Server and NSX-T Manager display.
- :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter and NSX Manager URLs and credentials." lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter Server and NSX Manager URLs and credentials." lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
1. Navigate to the VM you created in the preceding step and connect to the virtual machine. If you need help with connecting to the VM, see [connect to a virtual machine](../virtual-machines/windows/connect-logon.md#connect-to-the-virtual-machine) for details.
-1. In the Windows VM, open a browser and navigate to the vCenter and NSX-T Manager URLs in two tabs.
+1. In the Windows VM, open a browser and navigate to the vCenter Server and NSX-T Manager URLs in two tabs.
-1. In the vCenter tab, enter the `cloudadmin@vsphere.local` user credentials from the previous step.
+1. In the vSphere Client tab, enter the `cloudadmin@vsphere.local` user credentials from the previous step.
:::image type="content" source="media/tutorial-access-private-cloud/ss5-vcenter-login.png" alt-text="Screenshot showing the VMware vSphere sign in page." border="true":::
In this tutorial, you learn how to:
In this tutorial, you learned how to: > [!div class="checklist"]
-> * Create a Windows VM to use to connect to vCenter
-> * Login to vCenter from your VM
+> * Create a Windows VM to use to connect to vCenter Server
+> * Log in to vCenter Server from your VM
Continue to the next tutorial to learn how to create a virtual network to set up local management for your private cloud clusters.
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-configure-networking.md
Last updated 07/30/2021
# Tutorial: Configure networking for your VMware private cloud in Azure
-An Azure VMware Solution private cloud requires an Azure Virtual Network. Because Azure VMware Solution doesn't support your on-premises vCenter, you'll need to do additional steps to integrate with your on-premises environment. Setting up an ExpressRoute circuit and a virtual network gateway is also required.
+An Azure VMware Solution private cloud requires an Azure Virtual Network. Because Azure VMware Solution doesn't support your on-premises vCenter Server, you'll need to perform additional steps to integrate with your on-premises environment. Setting up an ExpressRoute circuit and a virtual network gateway is also required.
[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)]
In this tutorial, you learned how to:
> * Connect your ExpressRoute circuit to the gateway
-Continue to the next tutorial to learn how to create the NSX-T network segments used for VMs in vCenter.
+Continue to the next tutorial to learn how to create the NSX-T network segments used for VMs in vCenter Server.
> [!div class="nextstepaction"] > [Create an NSX-T network segment](./tutorial-nsx-t-network-segment.md)
azure-vmware Tutorial Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-create-private-cloud.md
Last updated 09/29/2021
The Azure VMware Solution private cloud gives you the ability to deploy a vSphere cluster in Azure. For each private cloud created, there's one vSAN cluster by default. You can add, delete, and scale clusters. The minimum number of hosts per cluster is three. More hosts can be added one at a time, up to a maximum of 16 hosts per cluster. The maximum number of clusters per private cloud is four. The initial deployment of Azure VMware Solution has three hosts.
-You use vSphere and NSX-T Manager to manage most other aspects of cluster configuration or operation. All local storage of each host in a cluster is under the control of vSAN.
+You use vCenter Server and NSX-T Manager to manage most other aspects of cluster configuration or operation. All local storage of each host in a cluster is under the control of vSAN.
>[!TIP] >You can always extend the cluster and add additional clusters later if you need to go beyond the initial deployment number.
-Because Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter at launch, you'll need to do additional steps for the configuration. This tutorial covers these steps and related prerequisites.
+Because Azure VMware Solution doesn't allow you to manage your private cloud with your cloud vCenter Server at launch, you'll need to do additional steps for the configuration. This tutorial covers these steps and related prerequisites.
In this tutorial, you'll learn how to:
azure-vmware Tutorial Delete Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-delete-private-cloud.md
When you delete a private cloud, all VMs, their data, clusters, and network addr
## Prerequisites
-If you require the VMs and their data later, make sure to back up the data before you delete the private cloud. Unfortunately, there's no way to recover the VMs and their data.
+If you require the VMs and their data later, make sure to back up the data before you delete the private cloud. Unfortunately, there's no way to recover the VMs and their data.
## Delete the private cloud
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
After you're finished, follow the recommended next steps at the end to continue
## Prerequisites -- Review the documentation on how to [enable connectivity in different Azure subscriptions](../expressroute/expressroute-howto-set-global-reach-cli.md#enable-connectivity-between-expressroute-circuits-in-different-azure-subscriptions).
+- Review the documentation on how to [enable connectivity in different Azure subscriptions](../expressroute/expressroute-howto-set-global-reach-portal.md).
- A separate, functioning ExpressRoute circuit for connecting on-premises environments to Azure, which is _circuit 1_ for peering.
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-network-checklist.md
When you create a virtual network connection in your subscription, the ExpressRo
> [!NOTE] > The ExpressRoute circuit is not part of a private cloud deployment. The on-premises ExpressRoute circuit is beyond the scope of this document. If you require on-premises connectivity to your private cloud, you can use one of your existing ExpressRoute circuits or purchase one in the Azure portal.
-When deploying a private cloud, you receive IP addresses for vCenter and NSX-T Manager. To access those management interfaces, you'll need to create more resources in your subscription's virtual network. You can find the procedures for creating those resources and establishing [ExpressRoute private peering](tutorial-expressroute-global-reach-private-cloud.md) in the tutorials.
+When deploying a private cloud, you receive IP addresses for vCenter Server and NSX-T Manager. To access those management interfaces, you'll need to create more resources in your subscription's virtual network. You can find the procedures for creating those resources and establishing [ExpressRoute private peering](tutorial-expressroute-global-reach-private-cloud.md) in the tutorials.
-The private cloud logical networking comes with pre-provisioned NSX-T. A Tier-0 gateway and Tier-1 gateway are pre-provisioned for you. You can create a segment and attach it to the existing Tier-1 gateway or attach it to a new Tier-1 gateway that you define. NSX-T logical networking components provide East-West connectivity between workloads and North-South connectivity to the internet and Azure services.
+The private cloud logical networking comes with pre-provisioned NSX-T Data Center configuration. A Tier-0 gateway and Tier-1 gateway are pre-provisioned for you. You can create a segment and attach it to the existing Tier-1 gateway or attach it to a new Tier-1 gateway that you define. NSX-T logical networking components provide East-West connectivity between workloads and North-South connectivity to the internet and Azure services.
>[!IMPORTANT] >[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)]
The subnets:
| Source | Destination | Protocol | Port | Description |
| --- | --- | :---: | :---: | --- |
-| Private Cloud DNS server | On-Premises DNS Server | UDP | 53 | DNS Client - Forward requests from PC vCenter for any on-premises DNS queries (check DNS section below) |
+| Private Cloud DNS server | On-Premises DNS Server | UDP | 53 | DNS Client - Forward requests from Private Cloud vCenter Server for any on-premises DNS queries (check DNS section below) |
| On-premises DNS Server | Private Cloud DNS server | UDP | 53 | DNS Client - Forward requests from on-premises services to Private Cloud DNS servers (check DNS section below) | | On-premises network | Private Cloud vCenter server | TCP(HTTP) | 80 | vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443. This redirection helps if you use `http://server` instead of `https://server`. | | Private Cloud management network | On-premises Active Directory | TCP | 389/636 | These ports are open to allow communications for Azure VMware Solutions vCenter to communicate to any on-premises Active Directory/LDAP server(s). These port(s) are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter. Port 636 is recommended for security purposes. |
-| Private Cloud management network | On-premises Active Directory Global Catalog | TCP | 3268/3269 | These ports are open to allow communications for Azure VMware Solutions vCenter to communicate to any on-premises Active Directory/LDAP global catalog server(s). These port(s) are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter. Port 3269 is recommended for security purposes. |
-| On-premises network | Private Cloud vCenter server | TCP(HTTPS) | 443 | This port allows you to access vCenter from an on-premises network. The default port that the vCenter Server system uses to listen for connections from the vSphere Client. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. |
+| Private Cloud management network | On-premises Active Directory Global Catalog | TCP | 3268/3269 | These ports are open to allow the Azure VMware Solution vCenter Server to communicate with any on-premises Active Directory/LDAP global catalog server(s). These port(s) are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter Server. Port 3269 is recommended for security purposes. |
+| On-premises network | Private Cloud vCenter Server | TCP(HTTPS) | 443 | This port allows you to access vCenter Server from an on-premises network. It's the default port that the vCenter Server system uses to listen for connections from the vSphere Client. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. |
| On-premises network | HCX Manager | TCP(HTTPS) | 9443 | Hybrid Cloud Manager Virtual Appliance Management Interface for Hybrid Cloud Manager system configuration. | | Admin Network | Hybrid Cloud Manager | SSH | 22 | Administrator SSH access to Hybrid Cloud Manager. | | HCX Manager | Cloud Gateway | TCP(HTTPS) | 8123 | Send host-based replication service instructions to the Hybrid Cloud Gateway. |
The subnets:
| Cloud Gateway | ESXi Hosts | TCP | 80,902 | Management and OVF deployment. | | Cloud Gateway (local)| Cloud Gateway (remote) | UDP | 4500 | Required for IPSEC<br> Internet key exchange (IKEv2) to encapsulate workloads for the bidirectional tunnel. Network Address Translation-Traversal (NAT-T) is also supported. | | Cloud Gateway (local) | Cloud Gateway (remote) | UDP | 500 | Required for IPSEC<br> Internet key exchange (ISAKMP) for the bidirectional tunnel. |
-| On-premises vCenter network | Private Cloud management network | TCP | 8000 | vMotion of VMs from on-premises vCenter to Private Cloud vCenter |
+| On-premises vCenter Server network | Private Cloud management network | TCP | 8000 | vMotion of VMs from on-premises vCenter Server to Private Cloud vCenter Server |
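To spot-check a few of the TCP rows in the table above from a machine on the relevant source network, a minimal sketch like the following can help. The hostnames are placeholders for your own vCenter Server and HCX Manager FQDNs, and the UDP rows (53, 500, 4500) can't be verified with a plain TCP connect.

```python
import socket

# Placeholder endpoints; substitute your private cloud FQDNs or IP addresses.
checks = [
    ("vcenter.avs.example", 443),  # vSphere Client / SDK access
    ("hcx.avs.example", 9443),     # HCX Manager appliance management interface
    ("hcx.avs.example", 22),       # administrator SSH to HCX Manager
]

for host, port in checks:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable")
    except OSError as err:
        print(f"{host}:{port} blocked or unreachable ({err})")
```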
## DHCP and DNS resolution considerations
azure-vmware Tutorial Nsx T Network Segment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-nsx-t-network-segment.md
Title: Tutorial - Add a network segment in Azure VMware Solution
-description: Learn how to add a network segment to use for virtual machines (VMs) in vCenter.
+description: Learn how to add a network segment to use for virtual machines (VMs) in vCenter Server.
Last updated 07/16/2021
Last updated 07/16/2021
# Tutorial: Add a network segment in Azure VMware Solution
-After deploying Azure VMware Solution, you can configure an NSX-T network segment from NSX-T Manager or the Azure portal. Once configured, the segments are visible in Azure VMware Solution, NSX-T Manger, and vCenter. NSX-T comes pre-provisioned by default with an NSX-T Tier-0 gateway in **Active/Active** mode and a default NSX-T Tier-1 gateway in **Active/Standby** mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
+After deploying Azure VMware Solution, you can configure an NSX-T network segment from NSX-T Manager or the Azure portal. Once configured, the segments are visible in Azure VMware Solution, NSX-T Manager, and vCenter Server. NSX-T Data Center comes pre-provisioned by default with an NSX-T Tier-0 gateway in **Active/Active** mode and a default NSX-T Tier-1 gateway in **Active/Standby** mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
>[!TIP] >The Azure portal presents a simplified view of the NSX-T operations a VMware administrator needs regularly, targeted at users who aren't familiar with NSX-T Manager.
In this tutorial, you learn how to:
## Prerequisites
-An Azure VMware Solution private cloud with access to the vCenter and NSX-T Manager interfaces. For more information, see the [Configure networking](tutorial-configure-networking.md) tutorial.
+An Azure VMware Solution private cloud with access to the vCenter Server and NSX-T Manager interfaces. For more information, see the [Configure networking](tutorial-configure-networking.md) tutorial.
## Use Azure portal to add an NSX-T segment
An Azure VMware Solution private cloud with access to the vCenter and NSX-T Mana
## Use NSX-T Manager to add network segment
-The virtual machines (VMs) created in vCenter are placed onto the network segments created in NSX-T and are visible in vCenter.
+The virtual machines (VMs) created in vCenter Server are placed onto the network segments created in NSX-T and are visible in vCenter Server.
[!INCLUDE [add-network-segment-steps](includes/add-network-segment-steps.md)]
Verify the presence of the new network segment. In this example, **ls01** is the
:::image type="content" source="media/nsxt/nsxt-new-segment-overview-2.png" alt-text="Screenshot showing the confirmation and status of the new network segment is present in NSX-T.":::
-1. In vCenter, select **Networking** > **SDDC-Datacenter**.
+1. In vCenter Server, select **Networking** > **SDDC-Datacenter**.
- :::image type="content" source="media/nsxt/vcenter-with-ls01-2.png" alt-text="Screenshot showing the confirmation that the new network segment is present in vCenter.":::
+ :::image type="content" source="media/nsxt/vcenter-with-ls01-2.png" alt-text="Screenshot showing the confirmation that the new network segment is present in vCenter Server.":::
## Next steps
-In this tutorial, you created an NSX-T network segment to use for VMs in vCenter.
+In this tutorial, you created an NSX-T network segment to use for VMs in vCenter Server.
You can now:
backup Backup Azure Sap Hana Database Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md
Title: Troubleshoot SAP HANA databases backup errors description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Previously updated : 02/23/2022 Last updated : 04/01/2022
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [
**Possible causes** | Azure Backup triggers an auto-heal Full backup to resolve **UserErrorHANALSNValidationFailure**. While this auto-heal backup is in progress, all the log backups triggered by HANA fail with **OperationCancelledBecauseConflictingAutohealOperationRunningUserError**.<br>Once the auto-heal Full backup is complete, logs and all other backups start working as expected.</br> **Recommended action** | Wait for the auto-heal Full backup to complete before you trigger a new Full/delta backup.
-### UserErrorHanaPreScriptNotRun
+### Environment pre-registration script run error
+
+#### UserErrorHanaPreScriptNotRun
+
+#### UserErrorPreregistrationScriptNotRun
**Error message** | `Pre-registration script not run.` | --
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [
**Possible causes** | System databases restore failed as the **&lt;sid&gt;adm** user environment couldn't find the **HDBsettings.sh** file to trigger restore. **Recommended action** | Work with the SAP HANA team to fix this issue.<br><br>If HXE is the SID, ensure that environment variable HOME is set to _/usr/sap/HXE/home_ as **sid-adm** user.
+### UserErrorInsufficientSpaceOnSystemDriveForExtensionMetadata
+
+**Error message** | `Insufficient space on HANA machine to perform Configure Backup, Backup or Restore activities.`
+- | --
+**Possible causes** | The disk space on your HANA machine is full or almost full, causing the Configure Backup, Backup, or Restore activities to fail.
+**Recommended action** | Check the disk space on your HANA machine to ensure that there is enough space for the Configure Backup, Backup, or Restore activities to complete successfully.
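As a quick way to check free space before triggering a Configure Backup, Backup, or Restore activity, a sketch along these lines works on any HANA Linux host. The mount points listed are hypothetical; replace them with the volumes your deployment actually writes to.

```python
import shutil

# Hypothetical paths; check the volumes your HANA backup and restore activities use.
for path in ("/", "/hana/data", "/hana/log"):
    try:
        usage = shutil.disk_usage(path)
    except OSError:
        continue  # skip mount points that don't exist on this host
    free_gib = usage.free / 2**30
    pct_free = usage.free / usage.total * 100
    print(f"{path}: {free_gib:.1f} GiB free ({pct_free:.0f}% of volume)")
```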
+ ### CloudDosAbsoluteLimitReached **Error message** | `Operation is blocked as you have reached the limit on number of operations permitted in 24 hours.` |
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Previously updated : 02/03/2022 Last updated : 04/01/2022
The following table lists the various alternatives you can use for establishing
| NSG service tags | Easier to manage as range changes are automatically merged <br><br> No additional costs | Can be used with NSGs only <br><br> Provides access to the entire service | | Azure Firewall FQDN tags | Easier to manage since the required FQDNs are automatically managed | Can be used with Azure Firewall only | | Allow access to service FQDNs/IPs | No additional costs <br><br> Works with all network security appliances and firewalls | A broad set of IPs or FQDNs may be required to be accessed |
-| Use an HTTP proxy | Single point of internet access to VMs | Additional costs to run a VM with the proxy software |
| [Virtual Network Service Endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) | Can be used for Azure Storage (= Recovery Services vault). <br><br> Provides large benefit to optimize performance of data plane traffic. | Can't be used for Azure AD, Azure Backup service. | | Network Virtual Appliance | Can be used for Azure Storage, Azure AD, Azure Backup service. <br><br> **Data plane** <ul><li> Azure Storage: `*.blob.core.windows.net`, `*.queue.core.windows.net`, `*.blob.storage.azure.net` </li></ul> <br><br> **Management plane** <ul><li> Azure AD: Allow access to FQDNs mentioned in sections 56 and 59 of [Microsoft 365 Common and Office Online](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). </li><li> Azure Backup service: `.backup.windowsazure.com` </li></ul> <br>Learn more about [Azure Firewall service tags](../firewall/fqdn-tags.md). | Adds overhead to data plane traffic and decreases throughput/performance. |
You can also use the following FQDNs to allow access to the required services fr
#### Use an HTTP proxy server to route traffic > [!NOTE]
-> Currently, there is no proxy support for SAP HANA. Please consider other options such as private end points if you wish to remove outbound connectivity requirements for database backups via Azure backup in HANA VMs.
+> Currently, we only support HTTP Proxy for Azure Active Directory (Azure AD) traffic for SAP HANA. If you need to remove outbound connectivity requirements (for Azure Backup and Azure Storage traffic) for database backups via Azure Backup in HANA VMs, use other options, such as private endpoints.
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Follow these steps:
>- The support for Enhanced policy is available in all Azure public regions, and not in US Sovereign regions. >- We support Enhanced policy configuration through [Recovery Services vault](./backup-azure-arm-vms-prepare.md) and [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm) only. Configuration through Backup center is currently not supported. >- For hourly backups, the last backup of the day is transferred to vault. If backup fails, the first backup of the next day is transferred to vault.
->- Enhanced policy can be only availed for unprotected VMs that are new to Azure Backup. Note that Azure VMs that are protected with existing policy can't be moved to Enhanced policy.
+>- Enhanced policy is only available to unprotected VMs that are new to Azure Backup. Note that Azure VMs that are protected with existing policy can't be moved to Enhanced policy.
## Next steps - [Run a backup immediately](./backup-azure-vms-first-look-arm.md#run-a-backup-immediately) - [Verify Backup job status](./backup-azure-arm-vms-prepare.md#verify-backup-job-status)-- [Restore Azure virtual machines](./backup-azure-arm-restore-vms.md#restore-disks)
+- [Restore Azure virtual machines](./backup-azure-arm-restore-vms.md#restore-disks)
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-restore.md
Title: Restore SAP HANA databases on Azure VMs description: In this article, discover how to restore SAP HANA databases that are running on Azure Virtual Machines. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 03/31/2022 Last updated : 04/01/2022
To restore the backup data as files instead of a database, choose **Restore as F
1. All the backup files associated with the selected restore point are dumped into the destination path. 1. Based on the type of restore point chosen (**Point in time** or **Full & Differential**), you'll see one or more folders created in the destination path. One of the folders named `Data_<date and time of restore>` contains the full backups, and the other folder named `Log` contains the log backups and other backups (such as differential, and incremental).+
+ >[!Note]
+ >If you've selected **Restore to a point in time**, the log files (dumped to the target VM) may sometimes contain logs beyond the point-in-time chosen for restore. Azure Backup does this to ensure that log backups for all HANA services are available for consistent and successful restore to the chosen point-in-time.
+ 1. Move these restored files to the SAP HANA server where you want to restore them as a database. 1. Then follow these steps: 1. Set permissions on the folder / directory where the backup files are stored using the following command:
backup Tutorial Backup Sap Hana Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-sap-hana-db.md
Title: Tutorial - Back up SAP HANA databases in Azure VMs description: In this tutorial, learn how to back up SAP HANA databases running on Azure VM to an Azure Backup Recovery Services vault. Previously updated : 01/10/2022 Last updated : 04/01/2022
Running the pre-registration script performs the following functions:
* CATALOG READ: to read the backup catalog. * SAP_INTERNAL_HANA_SUPPORT: to access a few private tables. Only required for SDC and MDC versions below HANA 2.0 SPS04 Rev 46. This isn't required for HANA 2.0 SPS04 Rev 46 and above, because the required information is now available from public tables with the fix from the HANA team. * Then add a key to hdbuserstore for your custom Backup user for the HANA backup plug-in to handle all operations (database queries, restore operations, configuring, and running backup). Pass this custom Backup user key to the script as a parameter: `-bk CUSTOM_BACKUP_KEY_NAME` or `-backup-key CUSTOM_BACKUP_KEY_NAME`. _Note that the password expiry of this custom backup key could lead to backup and restore failures._
+* If your HANA `<sid>adm` user is an Active Directory (AD) user, create an *msawb* group in your AD and add the `<sid>adm` user to this group. You must now specify that `<sid>adm` is an AD user in the pre-registration script using the parameters `-ad <SID>_ADM_USER` or `--ad-user <SID>_ADM_USER`.
>[!NOTE] > To learn what other parameters the script accepts, use the command `bash msawb-plugin-config-com-sap-hana.sh --help`
backup Tutorial Sap Hana Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-restore-cli.md
Typically, a network share path, or path of a mounted Azure file share when spec
Based on the type of restore point chosen (**Point in time** or **Full & Differential**), you'll see one or more folders created in the destination path. One of the folders named `Data_<date and time of restore>` contains the full backups, and the other folder named `Log` contains the log backups and other backups (such as differential and incremental).
+>[!Note]
+>If you've selected **Restore to a point in time**, the log files (dumped to the target VM) may sometimes contain logs beyond the point-in-time chosen for restore. Azure Backup does this to ensure that log backups for all HANA services are available for consistent and successful restore to the chosen point-in-time.
+ Move these restored files to the SAP HANA server where you want to restore them as a database. Then follow these steps to restore the database: 1. Set permissions on the folder / directory where the backup files are stored using the following command:
batch Batch Apis Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-apis-tools.md
Your applications and services can issue direct REST API calls or use one or mor
| **Batch REST** |[Azure REST API - Docs](/rest/api/batchservice/) |N/A |- |- | [Supported versions](/rest/api/batchservice/batch-service-rest-api-versioning) | | **Batch .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Batch/) |[Tutorial](tutorial-parallel-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) | [Release notes](https://aka.ms/batch-net-dataplane-changelog) | | **Batch Python** |[Azure SDK for Python - Docs](/python/api/overview/azure/batch/client) |[PyPI](https://pypi.org/project/azure-batch/) |[Tutorial](tutorial-parallel-python.md)|[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Python/Batch) | [Readme](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/batch/azure-batch/README.md) |
-| **Batch JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/batch/client) |[npm](https://www.npmjs.com/package/@azure/batch) |[Tutorial](batch-js-get-started.md) |- | [Readme](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/batch/batch) |
+| **Batch JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/batch) |[npm](https://www.npmjs.com/package/@azure/batch) |[Tutorial](batch-js-get-started.md) |- | [Readme](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/batch/batch) |
| **Batch Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Java) | [Readme](https://github.com/Azure/azure-batch-sdk-for-java)| ## Batch Management APIs
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/virtual-file-mount.md
new PoolAddParameter
RelativeMountPath = "cifsmountpoint", Source = "source", Password = "StorageAccountKey",
- MountOptions = "-o vers=3.0,dir_mode=0777,file_mode=0777,serverino"
+ MountOptions = "-o vers=3.0,dir_mode=0777,file_mode=0777,serverino,domain=MyDomain"
}, } }
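For comparison with the .NET fragment above, here's a hedged sketch of the same CIFS mount expressed as the JSON a Batch pool definition would carry, built in Python. The camelCase property names follow the Batch REST convention, but treat the exact shape as an assumption and confirm it against the Batch REST reference.

```python
import json

# Sketch of the pool's mountConfiguration fragment; property names are assumed
# to follow the Batch REST API's camelCase convention.
cifs_mount = {
    "cifsMountConfiguration": {
        "username": "MyDomain\\myuser",  # hypothetical domain account
        "source": "source",
        "relativeMountPath": "cifsmountpoint",
        "password": "StorageAccountKey",
        "mountOptions": "-o vers=3.0,dir_mode=0777,file_mode=0777,serverino,domain=MyDomain",
    }
}

print(json.dumps({"mountConfiguration": [cifs_mount]}, indent=2))
```

The `domain=MyDomain` mount option, which passes the Windows domain to the CIFS mount helper, is the addition the snippet above makes.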
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
During the public preview of Azure Chaos Studio, there are a few limitations and
## Limitations
-* For agent-based faults, the machine must have access to the following **HTTPS endpoints**:
- * http://agentcommunicationservice-frontdoor-canary.trafficmanager.net
+* For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service:
+ * Regional endpoints to allowlist are listed [in this article](chaos-studio-permissions-security.md#network-security).
* If sending telemetry data to Application Insights, the IPs [in this document](../azure-monitor/app/ip-addresses.md) are also required.
-* If running an experiment that makes use of the Chaos Agent, the VM must run one of the following **operating systems**:
+* If running an experiment that makes use of the Chaos Agent, the virtual machine must run one of the following **operating systems**:
* Windows Server 2019, Windows Server 2016, Windows Server 2012, and Windows Server 2012 R2 * Red Hat Enterprise Linux 8.2, SUSE Enterprise Linux 15 SP2, CentOS 8.2, Debian 10 Buster (with unzip installation required), Oracle Linux 7.8, Ubuntu Server 16.04 LTS, and Ubuntu Server 18.04 LTS * The Chaos Agent is not tested against custom Linux distributions, hardened Linux distributions (for example, FIPS or SELinux)
During the public preview of Azure Chaos Studio, there are a few limitations and
* **MacOS:** Safari, Google Chrome, Firefox ## Known issues
-* The **Enable agent-based target** experience in the Azure portal does not also assign the user-assigned managed identity to the virtual machine or virtual machine scale set. This must be done manually or an agent-based fault in an experiment will fail with the error: "Verify that the target is correctly onboarded and proper read permissions are provided to the experiment msi." This can be done after enabling the agent-based target, but may require a reboot.
-* Onboarding a target in the Azure portal may fail if you navigate away from the Targets view before onboarding completes.
-* When creating an experiment, after clicking **Review + create** there is a delay before the created experiment appears in the experiment list and users must refresh to see the experiment in the list.
* When picking target resources for an agent-based fault in the experiment designer, it is possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected.
chaos-studio Chaos Studio Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-permissions-security.md
To assign these permissions granularly, you can [create a custom role](../role-b
All user interactions with Chaos Studio happen through Azure Resource Manager. If a user starts an experiment, the experiment may interact with endpoints other than Resource Manager depending on the fault. * Service-direct faults - Most service-direct faults are executed through Azure Resource Manager. Target resources do not require any allowlisted network endpoints. * Service-direct AKS Chaos Mesh faults - Service-direct faults for Azure Kubernetes Service that use Chaos Mesh require that the AKS cluster has a publicly exposed Kubernetes API server. [You can learn how to limit AKS network access to a set of IP ranges here.](../aks/api-server-authorized-ip-ranges.md)
-* Agent-based faults - Agent-based faults require agent access to the Chaos Studio agent service. A virtual machine or virtual machine scale set must have outbound access to http://agentcommunicationservice-frontdoor-canary.trafficmanager.net for the agent to connect successfully.
+* Agent-based faults - Agent-based faults require agent access to the Chaos Studio agent service. A virtual machine or virtual machine scale set must have outbound access to the agent service endpoint for the agent to connect successfully. The agent service endpoint is `https://acs-prod-<region>.chaosagent.trafficmanager.net`, replacing `<region>` with the region where your virtual machine is deployed, for example, `https://acs-prod-eastus.chaosagent.trafficmanager.net` for a virtual machine in East US.
Azure Chaos Studio does not support Service Tags or Private Link.
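As a quick check that a virtual machine has the required outbound access, the following sketch builds the regional endpoint from the documented pattern and attempts a TCP connection on port 443. Run it from the VM or scale set instance in question.

```python
import socket

def chaos_agent_endpoint(region: str) -> str:
    """Build the regional Chaos Studio agent service host name documented above."""
    return f"acs-prod-{region}.chaosagent.trafficmanager.net"

host = chaos_agent_endpoint("eastus")  # use the region where your VM is deployed
try:
    with socket.create_connection((host, 443), timeout=5):
        print(f"outbound HTTPS to {host} is open")
except OSError as err:
    print(f"outbound HTTPS to {host} is blocked ({err})")
```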
cognitive-services How To Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-create-project.md
Creating a project is the first step toward building a model.
## Create a project
-1. In the [Custom Translator](https://portal.customtranslator.azure.ai) portal, select **Create project**.
+1. In the [Custom Translator](https://legacy.portal.customtranslator.azure.ai/) legacy portal, select **Create project**.
![Create project](media/how-to/how-to-create-project.png)
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/concepts/data-formats.md
Title: Custom text classification data formats
-description: Learn about the data formats accepted by custom entity extraction.
+description: Learn about the data formats accepted by custom text classification.
Your tags file should be in the `json` format below.
* `documents`: An array of tagged documents. * `location`: The path of the file. The file has to be in root of the storage container. * `language`: Language of the file. Use one of the [supported culture locales](../language-support.md).
- * `classifiers`: Array of classifier objects assigned to the file. If you're working on a single classification project, there should be one classifier per file only.
+ * `classifiers`: Array of classifier objects assigned to the file. If you're working on a single label classification project, there should be one classifier per file only.
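Based only on the fields described above, a minimal tags file might look like the following sketch. The property name inside each classifier object (shown here as `classifierName`) is an assumption to verify against the service's sample files; a multi label project may list several classifiers per file, while a single label project lists exactly one.

```json
{
  "documents": [
    {
      "location": "doc1.txt",
      "language": "en-us",
      "classifiers": [
        { "classifierName": "ClassA" }
      ]
    },
    {
      "location": "doc2.txt",
      "language": "en-us",
      "classifiers": [
        { "classifierName": "ClassB" },
        { "classifierName": "ClassC" }
      ]
    }
  ]
}
```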
## Next steps
cognitive-services Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/concepts/evaluation.md
Title: Custom classification evaluation metrics
+ Title: Custom text classification evaluation metrics
-description: Learn about evaluation metrics in custom entity extraction.
+description: Learn about evaluation metrics in custom text classification.
So what does it actually mean to have a high precision or a high recall for a ce
| High | Low | The model predicts this class well, but with low confidence. This may be because this class is overrepresented in the dataset, so consider balancing your data distribution. | | Low | Low | This class is poorly handled by the model; it is not usually predicted, and when it is predicted, it is not with high confidence. |
-Custom classification models are expected to experience both false negatives and false positives. You need to consider how each will affect the overall system, and carefully think through scenarios where the model will ignore correct predictions, and recognize incorrect predictions. Depending on your scenario, either *precision* or *recall* could be more suitable evaluating your model's performance.
+Custom text classification models are expected to experience both false negatives and false positives. You need to consider how each will affect the overall system, and carefully think through scenarios where the model will ignore correct predictions, and recognize incorrect predictions. Depending on your scenario, either *precision* or *recall* could be more suitable for evaluating your model's performance.
For example, if your scenario involves processing technical support tickets, predicting the wrong class could cause it to be forwarded to the wrong department/team. In this example, you should consider making your system more sensitive to false positives, and precision would be a more relevant metric for evaluation.
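For reference, the precision and recall figures discussed above follow the standard definitions, where TP, FP, and FN are the counts of true positives, false positives, and false negatives for a given class:

```latex
\text{precision} = \frac{TP}{TP + FP}
\qquad
\text{recall} = \frac{TP}{TP + FN}
```

In the support-ticket example, a false positive routes a ticket to the wrong team, which is why precision is the metric to watch there.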
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/fail-over.md
Title: Back up and recover your custom classification models
+ Title: Back up and recover your custom text classification models
-description: Learn how to save and recover your custom classification models.
+description: Learn how to save and recover your custom text classification models.
When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, you should design it to fail over into another region. This requires two Azure Language resources in different regions and the ability to sync custom models across regions.
-If your app or business depends on the use of a custom classification model, we recommend that you create a replica of your project into another supported region. So that if a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
+If your app or business depends on the use of a custom text classification model, we recommend that you create a replica of your project in another supported region, so that if a regional outage occurs, you can access your model in the fail-over region where you replicated your project.
Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./how-to/train-model.md) and [deploy](how-to/call-api.md#deploy-your-model) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
Use the url from the `resultUrl` key in the body to view the exported assets fro
### Get export results
-Submit a **GET** request using the `{RESULT-URL}` you recieved from the previous step to view the results of the export job.
+Submit a **GET** request using the `{RESULT-URL}` you received from the previous step to view the results of the export job.
#### Headers
Use the following header to authenticate your request.
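Putting the pieces together, a minimal sketch of this GET request in Python might look like the following. It assumes the standard Cognitive Services key header, `Ocp-Apim-Subscription-Key`, and uses the third-party `requests` package; `<RESULT-URL>` stays a placeholder for the `resultUrl` value from the previous step.

```python
import requests  # third-party: pip install requests

result_url = "<RESULT-URL>"  # the resultUrl value returned by the export job

# Standard Cognitive Services key header (assumed here); use your Language resource key.
headers = {"Ocp-Apim-Subscription-Key": "<YOUR-RESOURCE-KEY>"}

response = requests.get(result_url, headers=headers, timeout=30)
response.raise_for_status()
print(response.json())
```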
## Deploy your model
-This is te step where you make your trained model available form consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
> [!TIP] > Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/faq.md
After deploying your model, you [call the prediction API](how-to/call-api.md), u
## Data privacy and security
-Custom text classification is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, Custom classification users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://aka.ms/ct-authoring-swagger).
+Custom text classification is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, custom text classification users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://aka.ms/ct-authoring-swagger).
-Your data is only stored in your Azure Storage account. Custom classification only has access to read from it during training.
+Your data is only stored in your Azure Storage account. Custom text classification only has access to read from it during training.
## How to clone my project?
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/call-api.md
Title: How to submit custom classification tasks
+ Title: How to submit custom text classification tasks
description: Learn about sending a request for custom text classification.
After you're satisfied with your model, and made any necessary improvements, you
## Prerequisites
-* [A custom classification project](create-project.md) with a configured Azure blob storage account,
+* [A custom text classification project](create-project.md) with a configured Azure blob storage account,
* Text data that has [been uploaded](create-project.md#prepare-training-data) to your storage account. * [Tagged data](tag-data.md) and successfully [trained model](train-model.md) * Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
First you will need to get your resource key and endpoint
:::image type="content" source="../media/get-endpoint-azure.png" alt-text="Get the Azure endpoint" lightbox="../media/get-endpoint-azure.png":::
-### Submit text classification task
+### Submit a custom text classification task
1. Start constructing a POST request by updating the following URL with your endpoint.
First you will need to get your resource key and endpoint
3. In the JSON body of your request, you will specify the documents you're inputting for analysis, and the parameters for the custom text classification task. `project-name` is case-sensitive. > [!tip]
- > See the [quickstart article](../quickstart.md?pivots=rest-api#submit-text-classification-task) and [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) for more information about the JSON syntax.
+ > See the [quickstart article](../quickstart.md?pivots=rest-api#submit-a-custom-text-classification-task) and [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) for more information about the JSON syntax.
```json {
First you will need to get your resource key and endpoint
* [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_category_classify.py)
- Multiple label classification:
+ Multi label classification:
* [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample11_MultiCategoryClassify.md) * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentMultiCategory.java) * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
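For illustration, here's a hedged sketch of submitting the task with raw REST calls in Python rather than an SDK. The API version matches the reference linked above, but the task name (`customSingleClassificationTasks`) and the exact body shape are assumptions to confirm against that reference before use.

```python
import requests  # third-party: pip install requests

endpoint = "<YOUR-ENDPOINT>"  # for example, https://<resource-name>.cognitiveservices.azure.com
key = "<YOUR-RESOURCE-KEY>"

# Body shape is a sketch; confirm task names against the reference documentation.
body = {
    "displayName": "Classify documents",
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en-us", "text": "Text to classify."}
        ]
    },
    "tasks": {
        "customSingleClassificationTasks": [
            {
                "parameters": {
                    "project-name": "<PROJECT-NAME>",       # case-sensitive
                    "deployment-name": "<DEPLOYMENT-NAME>"
                }
            }
        ]
    },
}

response = requests.post(
    f"{endpoint}/text/analytics/v3.2-preview.2/analyze",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
    timeout=30,
)
response.raise_for_status()
# The job runs asynchronously; poll the URL returned in the operation-location header.
print(response.headers.get("operation-location"))
```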
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/create-project.md
Title: How to create custom text classification projects
-description: Learn about the steps for using Azure resources with custom classification.
+description: Learn about the steps for using Azure resources with custom text classification.
Before you start using custom text classification, you will need several things:
## Azure resources
-Before you start using custom classification, you will need an Azure Language resource. We recommend following the steps below for creating your resource in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text classification.
+Before you start using custom text classification, you will need an Azure Language resource. We recommend following the steps below for creating your resource in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text classification.
You also will need an Azure storage account where you will upload your `.txt` files that will be used to train a model to classify text.
You also will need an Azure storage account where you will upload your `.txt` fi
If it's your first time logging in, you'll see a window in [Language Studio](https://aka.ms/languageStudio) that will let you choose a language resource or create a new one. You can also create a resource by clicking the settings icon in the top-right corner, selecting **Resources**, then clicking **Create a new resource**. > [!IMPORTANT]
-> * To use Custom Text Classification, you'll need a Language resource in **West US 2** or **West Europe** with the Standard (**S**) pricing tier.
+> * To use custom text classification, you'll need a Language resource in **West US 2** or **West Europe** with the Standard (**S**) pricing tier.
> * Be sure to select **Managed Identity** when you create a resource. :::image type="content" source="../../media/create-new-resource-small.png" alt-text="A screenshot showing the resource creation screen in Language Studio." lightbox="../../media/create-new-resource.png":::
-To use custom classification, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already.
+To use custom text classification, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already.
Next you'll need to assign the [correct roles](#roles-for-your-storage-account) for the storage account to connect it to your Language resource.
You can use an existing Language resource to get started with custom text classi
|Pricing tier | Make sure your existing resource is in the Standard (**S**) pricing tier. Only this pricing tier is supported. If your resource doesn't use this pricing tier, you will need to create a new resource. | |Managed identity | Make sure that the resource-managed identity setting is enabled. Otherwise, read the next section. |
-To use custom classification, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already.
+To use custom text classification, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already.
Next you'll need to assign the [correct roles](#roles-for-your-storage-account) for the storage account to connect it to your Language resource.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/design-schema.md
Title: How to prepare data and define a schema
-description: Learn about data selection, preparation, and creating a schema for custom classification projects.
+description: Learn about data selection, preparation, and creating a schema for custom text classification projects.
# How to prepare data and define a schema
-In order to create a custom classification model, you will need quality data to train it. This article covers how you should approach selecting and preparing your data, along with defining a schema. A schema defines the classes that you need your model to classify your text into at runtime, and is the first step of [developing a custom classification application](../overview.md#project-development-lifecycle).
+In order to create a custom text classification model, you will need quality data to train it. This article covers how you should approach selecting and preparing your data, along with defining a schema. A schema defines the classes that you need your model to classify your text into at runtime, and is the first step of [developing a custom classification application](../overview.md#project-development-lifecycle).
## Data selection
The schema defines the classes that you need your model to classify your text in
## Next steps
-If you haven't already, create a custom classification project. If it's your first time using custom classification, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [project requirements](../how-to/create-project.md) for more details on what you need to create a project.
+If you haven't already, create a custom text classification project. If it's your first time using custom text classification, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [project requirements](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/improve-model.md
After you've trained your model you reviewed its evaluation details, you can dec
To optionally improve a model, you will need to have:
-* [A custom classification project](create-project.md) with a configured Azure blob storage account,
+* [A custom text classification project](create-project.md) with a configured Azure blob storage account,
* Text data that has [been uploaded](create-project.md#prepare-training-data) to your storage account. * [Tagged data](tag-data.md) to successfully [train a model](train-model.md) * Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/tag-data.md
Use the following steps to tag your data
:::image type="content" source="../media/tag-single.png" alt-text="A screenshot showing the single label classification menu" lightbox="../media/tag-single.png":::
- * **Multiple label classification**: your file can be tagged with multiple classes, you can do so by checking all applicable check boxes next to the classes you want to tag this file with.
+ * **Multi label classification**: your file can be tagged with multiple classes. To do so, check all applicable check boxes next to the classes you want to tag this file with.
- :::image type="content" source="../media/tag-multi.png" alt-text="A screenshot showing the multiple label classification menu" lightbox="../media/tag-multi.png":::
+ :::image type="content" source="../media/tag-multi.png" alt-text="A screenshot showing the multi label classification menu" lightbox="../media/tag-multi.png":::
While tagging, your changes are synced periodically; if they haven't been saved yet, you'll see a warning at the top of the page. To save manually, select the **Save tags** button at the top of the page.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/train-model.md
Title: How to train your custom classification model - Azure Cognitive Services
+ Title: How to train your custom text classification model - Azure Cognitive Services
description: Learn about how to train your model for custom text classification.
-# How to train a text classification model
+# How to train a custom text classification model
Training is the process where the model learns from your [tagged data](tag-data.md). After training is completed, you will be able to [use the model evaluation metrics](../how-to/view-model-evaluation.md) to determine if you need to [improve your model](../how-to/improve-model.md).
See the [application development lifecycle](../overview.md#project-development-l
## Data split
-Before starting the training process, files in your dataset are divided into three groups at random:
+Before you start the training process, files in your dataset are divided into three groups at random:
* The **training set** contains 80% of the files in your dataset. It is the main set that is used to train the model.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/view-model-evaluation.md
Title: View a custom classification model evaluation - Azure Cognitive Services
+ Title: View a custom text classification model evaluation - Azure Cognitive Services
-description: Learn how to view the evaluation scores for a custom classification model
+description: Learn how to view the evaluation scores for a custom text classification model
# View the model evaluation
-Reviewing model evaluation is an important step in developing a custom classification model. It helps you learn how well your model is performing, and gives you an idea about how it will perform when used in production.
+Reviewing model evaluation is an important step in developing a custom text classification model. It helps you learn how well your model is performing, and gives you an idea about how it will perform when used in production.
## Prerequisites Before you train your model you need:
-* [A custom classification project](create-project.md) with a configured Azure blob storage account,
+* [A custom text classification project](create-project.md) with a configured Azure blob storage account,
* Text data that has [been uploaded](create-project.md#prepare-training-data) to your storage account. * [Tagged data](tag-data.md) * A successfully [trained model](train-model.md)
The evaluation process uses the trained model to predict user-defined classes fo
Under the **Test set confusion matrix**, you can find the confusion matrix for the model. > [!NOTE]
-> The confusion matrix is currently not supported for multiple label classification projects.
+> The confusion matrix is currently not supported for multi label classification projects.
**Single label classification**
-<!-- **Multiple Label Classification**
+<!-- **Multi label classification**
## Next steps
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/language-support.md
Title: Language support in custom text classification
-description: Learn about which languages are supported by custom entity extraction.
+description: Learn about which languages are supported by custom text classification.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/overview.md
Title: What is custom classification (preview) in Azure Cognitive Services for Language?
+ Title: What is custom text classification (preview) in Azure Cognitive Services for Language?
description: Learn how use custom text classification.
Custom text classification is one of the features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models for text classification tasks.
-Custom text classification is offered as part of the custom features within Azure Cognitive for Language. This feature enables its users to build custom AI models to classify text into custom categories pre-defined by the user. By creating a Custom classification project, developers can iteratively tag data, train, evaluate, and improve model performance before making it available for consumption. The quality of the tagged data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+Custom text classification is offered as part of the custom features within Azure Cognitive Service for Language. This feature enables its users to build custom AI models to classify text into custom categories pre-defined by the user. By creating a custom text classification project, developers can iteratively tag data, train, evaluate, and improve model performance before making it available for consumption. The quality of the tagged data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
Custom text classification supports two types of projects:
This documentation contains the following article types:
### Automatic emails/ticket triage
-Support centers of all types receive thousands to hundreds of thousands of emails/tickets containing unstructured, free-form text, and attachments. Timely review, acknowledgment, and routing to subject matter experts within internal teams is critical. However, email triage at this scale involving people to review and route to the right departments takes time and precious resources. Custom classification can be used to analyze incoming text triage and categorize the content to be automatically routed to the relevant department to take necessary actions.
+Support centers of all types receive thousands to hundreds of thousands of emails/tickets containing unstructured, free-form text and attachments. Timely review, acknowledgment, and routing to subject matter experts within internal teams is critical. However, email triage at this scale, with people reviewing and routing to the right departments, takes time and precious resources. Custom text classification can be used to analyze incoming text, triage it, and categorize the content to be automatically routed to the relevant department to take necessary actions.
### Knowledge mining to enhance/enrich semantic search
-Search is foundational to apps that display text content to users, with common scenarios including: catalog or document search, retail product search, or knowledge mining for data science. Many enterprises across various industries are looking into building a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use custom classification to categorize text into classes that are relevant to their industry. The predicted classes could be used to enrich the indexing of the file for a more customized search experience.
+Search is foundational to apps that display text content to users, with common scenarios including: catalog or document search, retail product search, or knowledge mining for data science. Many enterprises across various industries are looking into building a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use custom text classification to categorize text into classes that are relevant to their industry. The predicted classes could be used to enrich the indexing of the file for a more customized search experience.
## Project development lifecycle
-Creating a custom classification project typically involves several different steps.
+Creating a custom text classification project typically involves several different steps.
:::image type="content" source="media/development-lifecycle.png" alt-text="The development lifecycle" lightbox="media/development-lifecycle.png":::
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/quickstart.md
Use this article to get started with creating a custom text classification proje
## Next steps
-After you've created a text classification model, you can:
+After you've created a custom text classification model, you can:
* [Use the runtime API to classify text](how-to/call-api.md)
-When you start to create your own text classification projects, use the how-to articles to learn more about developing your model in greater detail:
+When you start to create your own custom text classification projects, use the how-to articles to learn more about developing your model in greater detail:
* [Data selection and schema design](how-to/design-schema.md)
* [Tag data](how-to/tag-data.md)
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/tutorials/cognitive-search.md
Title: Enrich a Cognitive Search index with custom classes
-description: Improve your cognitive search indices using custom classifications
+description: Improve your cognitive search indices using custom text classification
-# Tutorial: Enrich Cognitive search index with custom classifications from your data
+# Tutorial: Enrich a Cognitive Search index with custom classes from your data
-With the abundance of electronic documents within the enterprise, the problem of search through them becomes a tiring and expensive task. [Azure Cognitive Search](../../../../search/search-create-service-portal.md) helps with searching through your files based on their indices. Custom classification helps in enriching the indexing of these files by classifying them into your custom classes.
+With the abundance of electronic documents within the enterprise, searching through them becomes a tiring and expensive task. [Azure Cognitive Search](../../../../search/search-create-service-portal.md) helps with searching through your files based on their indices. Custom text classification helps in enriching the indexing of these files by classifying them into your custom classes.
In this tutorial, you will learn how to:
-* Create a custom classification project.
+* Create a custom text classification project.
* Publish an Azure function.
* Add an index to your Azure Cognitive Search service.
In this tutorial, you will learn how to:
* Download this [sample data](https://github.com/Azure-Samples/cognitive-services-sample-data-files/raw/master/language-service/Custom%20text%20classification/Custom%20multi%20classification%20-%20movies%20summary.zip).
-## Create a custom classification project through Language studio
+## Create a custom text classification project through Language studio
[!INCLUDE [Create a project using Language Studio](../includes/create-project.md)]
If you deploy your model through Language Studio, your `deployment-name` will be
:::image type="content" source="../../media/azure-portal-resource-credentials.png" alt-text="A screenshot showing the key and endpoint screen in the Azure portal" lightbox="../../media/azure-portal-resource-credentials.png":::
-6. Get your custom classification project secrets
+6. Get your custom text classification project secrets
+ 1. You will need your **project-name**; project names are case-sensitive.
Replace `name-your-index-here` with the index name that appears in your Cognitiv
## Next steps
-* [Search your app with with the Cognitive Search SDK](../../../../search/search-howto-dotnet-sdk.md#run-queries)
+* [Search your app with the Cognitive Search SDK](../../../../search/search-howto-dotnet-sdk.md#run-queries)
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
For more information on the SMS SDK and service, see the [SMS SDK overview](./sm
|Number of participants in thread|250 |
|Batch of participants - CreateThread|200 |
|Batch of participants - AddParticipant|200 |
+|Page size - ListMessages|200 |
## Voice and video calling
cosmos-db Migrate Data Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-arcion.md
+
+ Title: Migrate data from Cassandra to Azure Cosmos DB Cassandra API using Arcion
+description: Learn how to migrate data from Apache Cassandra database to Azure Cosmos DB Cassandra API using Arcion.
+++++ Last updated : 04/02/2022+++
+# Migrate data from Cassandra to Azure Cosmos DB Cassandra API account using Arcion
+
+Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for many reasons such as:
+
+* **No overhead of managing and monitoring:** It eliminates the overhead of managing and monitoring a myriad of settings across OS, JVM, and yaml files and their interactions.
+
+* **Significant cost savings:** You can save costs with Azure Cosmos DB, including the cost of VMs, bandwidth, and any applicable licenses. Additionally, you don't have to manage the data centers, servers, SSD storage, networking, and electricity costs.
+
+* **Ability to use existing code and tools:** Azure Cosmos DB provides wire protocol level compatibility with existing Cassandra SDKs and tools. This compatibility ensures you can use your existing codebase with Azure Cosmos DB Cassandra API with trivial changes.
+
+There are various ways to migrate database workloads from one platform to another. [Arcion](https://www.arcion.io) is a tool that offers a secure and reliable way to perform zero downtime migration from other databases to Azure Cosmos DB. This article describes the steps required to migrate data from Apache Cassandra database to Azure Cosmos DB Cassandra API using Arcion.
+
+## Benefits using Arcion for migration
+
+Arcion's migration solution follows a step-by-step approach to migrate complex operational workloads. The following are some of the key aspects of Arcion's zero-downtime migration plan:
+
+* It offers automatic migration of business logic (tables, indexes, views) from Apache Cassandra database to Azure Cosmos DB. You don't have to create schemas manually.
+
+* Arcion offers high-volume and parallel database replication. It enables both the source and target platforms to be in-sync during the migration by using a technique called Change-Data-Capture (CDC). By using CDC, Arcion continuously pulls a stream of changes from the source database (Apache Cassandra) and applies it to the destination database (Azure Cosmos DB).
+
+* It's fault-tolerant and provides exactly once delivery of data even during a hardware or software failure in the system.
+
+* It secures the data during transit using security methodologies like TLS and encryption.
+
+## Steps to migrate data
+
+This section describes the steps required to set up Arcion and migrate data from Apache Cassandra database to Azure Cosmos DB.
+
+1. From the computer where you plan to install the Arcion replicant, add a security certificate. This certificate is required by the Arcion replicant to establish a TLS connection with the specified Azure Cosmos DB account. You can add the certificate with the following steps:
+
+ ```bash
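+    # Download the Baltimore CyberTrust Root certificate (bc2025.crt) and import
+    # it into the default JVM trust store so the replicant can make TLS connections.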
+ wget https://cacert.omniroot.com/bc2025.crt
+ mv bc2025.crt bc2025.cer
+ keytool -keystore $JAVA_HOME/lib/security/cacerts -importcert -alias bc2025ca -file bc2025.cer
+ ```
+
+1. You can get the Arcion installation and the binary files by requesting a demo on the [Arcion website](https://www.arcion.io). Alternatively, you can send an [email](mailto:support@arcion.io) to the team.
+
+ :::image type="content" source="./media/migrate-data-arcion/arcion-replicant-download.png" alt-text="Arcion replicant tool download":::
+
+ :::image type="content" source="./media/migrate-data-arcion/replicant-files.png" alt-text="Arcion replicant files":::
+
+1. From the CLI terminal, set up the source database configuration. Open the configuration file using **`vi conf/conn/cassandra.yml`** command and add a comma-separated list of IP addresses of the Cassandra nodes, port number, username, password, and any other required details. The following is an example of contents in the configuration file:
+
+ ```bash
+ type: CASSANDRA
+
+ host: 172.17.0.2
+ port: 9042
+
+ username: 'cassandra'
+ password: 'cassandra'
+
+ max-connections: 30
+
+ ```
+
+ :::image type="content" source="./media/migrate-data-arcion/open-connection-editor-cassandra.png" alt-text="Open Cassandra connection editor":::
+
+ :::image type="content" source="./media/migrate-data-arcion/cassandra-connection-configuration.png" alt-text="Cassandra connection configuration":::
+
+ After filling out the configuration details, save and close the file.
+
+1. Optionally, you can set up the source database filter file. The filter file specifies which schemas or tables to migrate. Open the configuration file using **`vi filter/cassandra_filter.yml`** command and enter the following configuration details:
+
+ ```bash
+
+ allow:
+    - schema: "io_arcion"
+ Types: [TABLE]
+ ```
+
+ After filling out the database filter details, save and close the file.
+
+1. Next, set up the destination database configuration. Before you define the configuration, [create an Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account) and then create a keyspace and a table to store the migrated data. Because you're migrating from Apache Cassandra to Cassandra API in Azure Cosmos DB, you can use the same partition key that you've used with Apache Cassandra.
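+
+    For illustration, here's a minimal `cqlsh` sketch of creating a keyspace and table; the keyspace, table, and column names are hypothetical placeholders rather than values prescribed by this article:
+
+    ```bash
+    # Connect to the Cassandra API endpoint over TLS (port 10350); replace the
+    # account name, password, and schema with your own values.
+    export SSL_VALIDATE=false
+    cqlsh <account-name>.cassandra.cosmos.azure.com 10350 \
+      -u <account-name> -p '<primary-password>' --ssl \
+      -e "CREATE KEYSPACE IF NOT EXISTS io_arcion WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
+          CREATE TABLE IF NOT EXISTS io_arcion.orders (order_id uuid PRIMARY KEY, customer text, total double);"
+    ```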
+
+1. Before migrating the data, increase the container throughput to the amount required for your application to migrate quickly. For example, you can increase the throughput to 100000 RUs. Scaling the throughput before starting the migration will help you to migrate your data in less time.
+
+    :::image type="content" source="./media/migrate-data-arcion/scale-throughput.png" alt-text="Scale Azure Cosmos container throughput":::
+
+ Decrease the throughput after the migration is complete. Based on the amount of data stored and RUs required for each operation, you can estimate the throughput required after data migration. To learn more on how to estimate the RUs required, see [Provision throughput on containers and databases](../set-throughput.md) and [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md) articles.
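+
+    If you manage throughput from the command line instead of the portal, a sketch like the following scales a Cassandra API table with the Azure CLI; the account, resource group, keyspace, and table names are placeholders:
+
+    ```bash
+    # Temporarily raise the table to 100,000 RU/s for the migration window.
+    az cosmosdb cassandra table throughput update \
+      --account-name <cosmos-account> \
+      --resource-group <resource-group> \
+      --keyspace-name io_arcion \
+      --name orders \
+      --throughput 100000
+    ```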
+
+1. Get the **Contact Point, Port, Username**, and **Primary Password** of your Azure Cosmos account from the **Connection String** pane. You'll use these values in the configuration file.
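+
+    You can also read these values from the command line; for example (account and resource group names are placeholders):
+
+    ```bash
+    # Lists the account's connection strings, including the Cassandra contact point, port, username, and password.
+    az cosmosdb keys list --name <cosmos-account> --resource-group <resource-group> --type connection-strings
+    ```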
+
+1. From the CLI terminal, set up the destination database configuration. Open the configuration file using **`vi conf/conn/cosmosdb.yml`** command and add a comma-separated list of host URI, port number, username, password, and other required parameters. The following example shows the contents of the configuration file:
+
+ ```bash
+ type: COSMOSDB
+
+    host: '<Azure Cosmos account contact point>'
+ port: 10350
+
+ username: 'arciondemo'
+    password: '<Azure Cosmos account primary password>'
+
+ max-connections: 30
+ ```
+
+1. Next migrate the data using Arcion. You can run the Arcion replicant in **full** or **snapshot** mode:
+
+    * **Full mode** – In this mode, the replicant continues to run after migration and it listens for any changes on the source Apache Cassandra system. If it detects any changes, they're replicated on the target Azure Cosmos account in real time.
+
+    * **Snapshot mode** – In this mode, you can perform schema migration and one-time data replication. Real-time replication isn't supported with this option.
+
+ By using the above two modes, migration can be performed with zero downtime.
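+
+    As a sketch, a snapshot-mode run swaps the mode argument and reuses the same configuration and filter files as the full-mode command in the next step (assuming your Arcion build accepts `snapshot` the same way it accepts `full`):
+
+    ```bash
+    # One-time schema migration and data replication; no real-time change capture.
+    ./bin/replicant snapshot conf/conn/cassandra.yaml conf/conn/cosmosdb.yaml --filter filter/cassandra_filter.yaml
+    ```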
+
+1. To migrate data, from the Arcion replicant CLI terminal, run the following command:
+
+ ```bash
+ ./bin/replicant full conf/conn/cassandra.yaml conf/conn/cosmosdb.yaml --filter filter/cassandra_filter.yaml --replace-existing
+ ```
+
+ The replicant UI shows the replication progress. Once the schema migration and snapshot operation are done, the progress shows 100%. After the migration is complete, you can validate the data on the target Azure Cosmos database.
+
+ :::image type="content" source="./media/migrate-data-arcion/cassandra-data-migration-output.png" alt-text="Cassandra data migration output":::
++
+1. Because you've used full mode for migration, you can perform operations such as insert, update, or delete data on the source Apache Cassandra database. Later, validate that they're replicated in real time on the target Azure Cosmos database. After the migration, make sure to decrease the throughput configured for your Azure Cosmos container.
+
+1. You can stop the replicant at any point and restart it with the **--resume** switch. The replication resumes from the point where it stopped, without compromising data consistency. The following command shows how to use the resume switch.
+
+ ```bash
+ ./bin/replicant full conf/conn/cassandra.yaml conf/conn/cosmosdb.yaml --filter filter/cassandra_filter.yaml --replace-existing --resume
+ ```
+
+To learn more about data migration to the destination and real-time migration, see the [Arcion replicant demo](https://www.youtube.com/watch?v=fsUhF9LUZmM).
+
+## Next steps
+
+* [Provision throughput on containers and databases](../set-throughput.md)
+* [Partition key best practices](../partitioning-overview.md#choose-partitionkey)
+* [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
cosmos-db Oracle Migrate Cosmos Db Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/oracle-migrate-cosmos-db-arcion.md
+
+ Title: Migrate data from Oracle to Azure Cosmos DB Cassandra API using Arcion
+description: Learn how to migrate data from Oracle database to Azure Cosmos DB Cassandra API using Arcion.
+++++ Last updated : 04/02/2022+++
+# Migrate data from Oracle to Azure Cosmos DB Cassandra API account using Arcion
+
+Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads that are running on Oracle for reasons such as:
+
+* **Better scalability and availability:** It eliminates single points of failure and provides better scalability and availability for your applications.
+
+* **Significant cost savings:** You can save costs with Azure Cosmos DB, including the cost of VMs, bandwidth, and any applicable Oracle licenses. Additionally, you don't have to manage the data centers, servers, SSD storage, networking, and electricity costs.
+
+* **No overhead of managing and monitoring:** As a fully managed cloud service, Azure Cosmos DB removes the overhead of managing and monitoring a myriad of settings.
+
+There are various ways to migrate database workloads from one platform to another. [Arcion](https://www.arcion.io) is a tool that offers a secure and reliable way to perform zero downtime migration from other databases to Azure Cosmos DB. This article describes the steps required to migrate data from Oracle database to Azure Cosmos DB Cassandra API using Arcion.
+
+## Benefits using Arcion for migration
+
+Arcion's migration solution follows a step-by-step approach to migrate complex operational workloads. The following are some of the key aspects of Arcion's zero-downtime migration plan:
+
+* It offers automatic migration of business logic (tables, indexes, views) from Oracle database to Azure Cosmos DB. You don't have to create schemas manually.
+
+* Arcion offers high-volume and parallel database replication. It enables both the source and target platforms to be in-sync during the migration by using a technique called Change-Data-Capture (CDC). By using CDC, Arcion continuously pulls a stream of changes from the source database (Oracle) and applies it to the destination database (Azure Cosmos DB).
+
+* It's fault-tolerant and guarantees exactly once delivery of data even during a hardware or software failure in the system.
+
+* It secures the data during transit using security methodologies like TLS/SSL and encryption.
+
+* It offers services to convert complex business logic written in PL/SQL to equivalent business logic in Azure Cosmos DB.
+
+## Steps to migrate data
+
+This section describes the steps required to set up Arcion and migrate data from Oracle database to Azure Cosmos DB.
+
+1. From the computer where you plan to install the Arcion replicant, add a security certificate. This certificate is required by the Arcion replicant to establish a TLS connection with the specified Azure Cosmos DB account. You can add the certificate with the following steps:
+
+ ```bash
+ wget https://cacert.omniroot.com/bc2025.crt
+ mv bc2025.crt bc2025.cer
+ keytool -keystore $JAVA_HOME/lib/security/cacerts -importcert -alias bc2025ca -file bc2025.cer
+ ```
+
+1. You can get the Arcion installation and the binary files by requesting a demo on the [Arcion website](https://www.arcion.io). Alternatively, you can send an [email](mailto:support@arcion.io) to the team.
+
+    :::image type="content" source="./media/oracle-migrate-cosmos-db-arcion/arcion-replicant-download.png" alt-text="Arcion replicant tool download":::
+
+ :::image type="content" source="./media/oracle-migrate-cosmos-db-arcion/replicant-files.png" alt-text="Arcion replicant files":::
+
+1. From the CLI terminal, set up the source database configuration. Open the configuration file using **`vi conf/conn/oracle.yml`** command and add a comma-separated list of IP addresses of the Oracle nodes, port number, username, password, and any other required details. The following code shows an example configuration file:
+
+ ```bash
+ type: ORACLE
+
+ host: localhost
+ port: 53546
+
+ service-name: IO
+
+ username: '<Username of your Oracle database>'
+ password: '<Password of your Oracle database>'
+
+ conn-cnt: 30
+ use-ssl: false
+ ```
+
+ :::image type="content" source="./media/oracle-migrate-cosmos-db-arcion/open-connection-editor-oracle.png" alt-text="Open Oracle connection editor":::
+
+ :::image type="content" source="./media/oracle-migrate-cosmos-db-arcion/oracle-connection-configuration.png" alt-text="Oracle connection configuration":::
+
+ After filling out the configuration details, save and close the file.
+
+1. Optionally, you can set up the source database filter file. The filter file specifies which schemas or tables to migrate. Open the configuration file using **`vi filter/oracle_filter.yml`** command and enter the following configuration details:
+
+ ```bash
+
+ allow:
+    - schema: "io_arcion"
+ Types: [TABLE]
+ ```
+
+ After filling out the database filter details, save and close the file.
+
+1. Next, set up the configuration of the destination database. Before you define the configuration, [create an Azure Cosmos DB Cassandra API account](manage-data-dotnet.md#create-a-database-account). [Choose the right partition key](../partitioning-overview.md#choose-partitionkey) from your data and then create a keyspace and a table to store the migrated data.
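+
+    For example, here's a hypothetical `cqlsh` sketch in which `customer_id` is chosen as the partition key; the keyspace, table, and column names are placeholders for your own schema:
+
+    ```bash
+    cqlsh <account-name>.cassandra.cosmos.azure.com 10350 -u <account-name> -p '<primary-password>' --ssl \
+      -e "CREATE KEYSPACE IF NOT EXISTS io_arcion WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
+          CREATE TABLE IF NOT EXISTS io_arcion.orders (
+              customer_id uuid, order_id uuid, total double,
+              PRIMARY KEY ((customer_id), order_id));"
+    ```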
+
+1. Before migrating the data, increase the container throughput to the amount required for your application to migrate quickly. For example, you can increase the throughput to 100000 RUs. Scaling the throughput before starting the migration will help you to migrate your data in less time.
+
+    :::image type="content" source="./media/oracle-migrate-cosmos-db-arcion/scale-throughput.png" alt-text="Scale Azure Cosmos container throughput":::
+
+ You must decrease the throughput after the migration is complete. Based on the amount of data stored and RUs required for each operation, you can estimate the throughput required after data migration. To learn more on how to estimate the RUs required, see [Provision throughput on containers and databases](../set-throughput.md) and [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md) articles.
+
+1. Get the **Contact Point, Port, Username**, and **Primary Password** of your Azure Cosmos account from the **Connection String** pane. You will use these values in the configuration file.
+
+1. From the CLI terminal, set up the destination database configuration. Open the configuration file using **`vi conf/conn/cosmosdb.yml`** command and add a comma-separated list of host URI, port number, username, password, and other required parameters. The following is an example of contents in the configuration file:
+
+ ```bash
+ type: COSMOSDB
+
+    host: '<Azure Cosmos account contact point>'
+ port: 10350
+
+ username: 'arciondemo'
+    password: '<Azure Cosmos account primary password>'
+
+ max-connections: 30
+ use-ssl: false
+ ```
+
+1. Next migrate the data using Arcion. You can run the Arcion replicant in **full** or **snapshot** mode:
+
+    * **Full mode** – In this mode, the replicant continues to run after migration and it listens for any changes on the source Oracle system. If it detects any changes, they're replicated on the target Azure Cosmos account in real time.
+
+    * **Snapshot mode** – In this mode, you can perform schema migration and one-time data replication. Real-time replication isn't supported with this option.
++
+ By using the above two modes, migration can be performed with zero downtime.
+
+1. To migrate data, from the Arcion replicant CLI terminal, run the following command:
+
+ ```bash
+ ./bin/replicant full conf/conn/oracle.yaml conf/conn/cosmosdb.yaml --filter filter/oracle_filter.yaml --replace-existing
+ ```
+
+ The replicant UI shows the replication progress. Once the schema migration and snapshot operation are done, the progress shows 100%. After the migration is complete, you can validate the data on the target Azure Cosmos database.
+
+ :::image type="content" source="./media/oracle-migrate-cosmos-db-arcion/oracle-data-migration-output.png" alt-text="Oracle data migration output":::
+
+1. Because you have used full mode for migration, you can perform operations such as insert, update, or delete data on the source Oracle database. Later, you can validate that they're replicated in real time on the target Azure Cosmos database. After the migration, make sure to decrease the throughput configured for your Azure Cosmos container.
+
+1. You can stop the replicant at any point and restart it with the **--resume** switch. The replication resumes from the point where it stopped, without compromising data consistency. The following command shows how to use the resume switch.
+
+ ```bash
+ ./bin/replicant full conf/conn/oracle.yaml conf/conn/cosmosdb.yaml --filter filter/oracle_filter.yaml --replace-existing --resume
+ ```
+
+To learn more about data migration to the destination and real-time migration, see the [Arcion replicant demo](https://www.youtube.com/watch?v=y5ZeRK5A-MI).
+
+## Next steps
+
+* [Provision throughput on containers and databases](../set-throughput.md)
+* [Partition key best practices](../partitioning-overview.md#choose-partitionkey)
+* [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
cosmos-db Postgres Migrate Cosmos Db Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/postgres-migrate-cosmos-db-kafka.md
Previously updated : 01/05/2021 Last updated : 04/02/2022
You can continue to insert more data into PostgreSQL and confirm that the record
* [Integrate Apache Kafka and Azure Cosmos DB Cassandra API using Kafka Connect](kafka-connect.md) * [Integrate Apache Kafka Connect on Azure Event Hubs (Preview) with Debezium for Change Data Capture](../../event-hubs/event-hubs-kafka-connect-debezium.md)
-* [Migrate data from Oracle to Azure Cosmos DB Cassandra API using Blitzz](oracle-migrate-cosmos-db-blitzz.md)
+* [Migrate data from Oracle to Azure Cosmos DB Cassandra API using Arcion](oracle-migrate-cosmos-db-arcion.md)
* [Provision throughput on containers and databases](../set-throughput.md)
* [Partition key best practices](../partitioning-overview.md#choose-partitionkey)
* [Estimate RU/s using the Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md)
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-migrationchoices.md
Previously updated : 11/03/2021 Last updated : 04/02/2022 # Options to migrate your on-premises or cloud data to Azure Cosmos DB
If you need help with capacity planning, consider reading our [guide to estimati
|Offline|[cqlsh COPY command](cassandr#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB Cassandra API| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.|
|Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/>&bull;Azure Cosmos DB Cassandra API| Azure Cosmos DB Cassandra API | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.|
|Online|[Striim (from Oracle DB/Apache Cassandra)](cassandr)| &bull;Oracle<br/>&bull;Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources.|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB Cassandra API <br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets.| &bull; Works with a large variety of sources like Oracle, DB2, SQL Server. <br/>&bull; Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>&bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
-|Online|[Blitzz (from Oracle DB/Apache Cassandra)](cassandr)|&bull;Oracle<br/>&bull;Apache Cassandra<br/><br/>See the [Blitzz website](https://www.blitzz.io/) for other supported sources. |Azure Cosmos DB Cassandra API. <br/><br/>See the [Blitzz website](https://www.blitzz.io/) for other supported targets. | &bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
+|Online|[Arcion (from Oracle DB/Apache Cassandra)](cassandr)|&bull;Oracle<br/>&bull;Apache Cassandra<br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported sources. |Azure Cosmos DB Cassandra API. <br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported targets. | &bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
## Other APIs
cosmos-db How To Always Encrypted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-always-encrypted.md
Title: Use client-side encryption with Always Encrypted for Azure Cosmos DB
description: Learn how to use client-side encryption with Always Encrypted for Azure Cosmos DB Previously updated : 03/30/2022 Last updated : 04/04/2022
var path1 = new ClientEncryptionIncludedPath
{
    Path = "/property1",
    ClientEncryptionKeyId = "my-key",
    EncryptionType = EncryptionType.Deterministic.ToString(),
- EncryptionAlgorithm = DataEncryptionKeyAlgorithm.AeadAes256CbcHmacSha256
+ EncryptionAlgorithm = DataEncryptionAlgorithm.AeadAes256CbcHmacSha256
};

var path2 = new ClientEncryptionIncludedPath
{
    Path = "/property2",
    ClientEncryptionKeyId = "my-key",
    EncryptionType = EncryptionType.Randomized.ToString(),
- EncryptionAlgorithm = DataEncryptionKeyAlgorithm.AeadAes256CbcHmacSha256
+ EncryptionAlgorithm = DataEncryptionAlgorithm.AeadAes256CbcHmacSha256
};

await database.DefineContainer("my-container", "/partition-key")
    .WithClientEncryptionPolicy()
cosmos-db Mongodb Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-introduction.md
Azure Cosmos DB API for MongoDB implements the wire protocol for MongoDB. This i
MongoDB feature compatibility: Azure Cosmos DB API for MongoDB is compatible with the following MongoDB server versions:
+- [Version 4.2](feature-support-42.md)
- [Version 4.0](feature-support-40.md) - [Version 3.6](feature-support-36.md) - [Version 3.2](feature-support-32.md)
cosmos-db Sql Api Query Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-query-metrics.md
Previously updated : 01/06/2021 Last updated : 04/04/2022 ms.devlang: csharp
Queries that need to consult all partitions incur higher latency, and can consume
To learn more about partitioning and partition keys, see [Partitioning in Azure Cosmos DB](../partitioning-overview.md). ### SDK and query options
-See [Performance Tips](performance-tips.md) and [Performance testing](performance-testing.md) for how to get the best client-side performance from Azure Cosmos DB. This includes using the latest SDKs, configuring platform-specific configurations like default number of connections, frequency of garbage collection, and using lightweight connectivity options like Direct/TCP.
--
-#### Max Item Count
-For queries, the value of `MaxItemCount` can have a significant impact on end-to-end query time. Each round trip to the server will return no more than the number of items in `MaxItemCount` (Default of 100 items). Setting this to a higher value (-1 is maximum, and recommended) will improve your query duration overall by limiting the number of round trips between server and client, especially for queries with large result sets.
-
-```cs
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- new FeedOptions
- {
- MaxItemCount = -1,
- }).AsDocumentQuery();
-```
-
-#### Max Degree of Parallelism
-For queries, tune the `MaxDegreeOfParallelism` to identify the best configurations for your application, especially if you perform cross-partition queries (without a filter on the partition-key value). `MaxDegreeOfParallelism` controls the maximum number of parallel tasks, i.e., the maximum of partitions to be visited in parallel.
-
-```cs
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- new FeedOptions
- {
- MaxDegreeOfParallelism = -1,
- EnableCrossPartitionQuery = true
- }).AsDocumentQuery();
-```
-
-Let's assume that
-* D = Default Maximum number of parallel tasks (= total number of processor in the client machine)
-* P = User-specified maximum number of parallel tasks
-* N = Number of partitions that needs to be visited for answering a query
-
-Following are implications of how the parallel queries would behave for different values of P.
-* (P == 0) => Serial Mode
-* (P == 1) => Maximum of one task
-* (P > 1) => Min (P, N) parallel tasks
-* (P < 1) => Min (N, D) parallel tasks
-
-For SDK release notes, and details on implemented classes and methods see [SQL SDKs](sql-api-sdk-dotnet.md)
+See [Query performance tips](performance-tips-query-sdk.md) and [Performance testing](performance-testing.md) for how to get the best client-side performance from Azure Cosmos DB using our SDKs.
### Network latency See [Azure Cosmos DB global distribution](tutorial-global-distribution-sql-api.md) for how to set up global distribution, and connect to the closest region. Network latency has a significant impact on query performance when you need to make multiple round-trips or retrieve a large result set from the query.
cosmos-db Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-query-performance.md
description: Learn how to identify, diagnose, and troubleshoot Azure Cosmos DB S
Previously updated : 02/16/2021 Last updated : 04/04/2022
This article provides examples that you can re-create by using the [nutrition da
Before reading this guide, it is helpful to consider common SDK issues that aren't related to the query engine. -- Follow these [SDK Performance tips](performance-tips.md).
- - [.NET SDK troubleshooting guide](troubleshoot-dot-net-sdk.md)
- - [Java SDK troubleshooting guide](troubleshoot-java-sdk-v4-sql.md)
-- The SDK allows setting a `MaxItemCount` for your queries but you can't specify a minimum item count.
- - Code should handle any page size, from zero to the `MaxItemCount`.
+- Follow these [SDK Performance tips for query](performance-tips-query-sdk.md).
- Sometimes queries may have empty pages even when there are results on a future page. Reasons for this could be:
  - The SDK could be doing multiple network calls.
  - The query might be taking a long time to retrieve the documents.
data-lake-analytics Data Lake Tools For Vscode Local Run And Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-tools-for-vscode-local-run-and-debug.md
Follow steps below to perform local debug:
## Next steps * [Use the Azure Data Lake Tools for Visual Studio Code](data-lake-analytics-data-lake-tools-for-vscode.md)
-* [Develop U-SQL with Python, R, and CSharp for Azure Data Lake Analytics in VSCode](data-lake-analytics-u-sql-develop-with-python-r-csharp-in-vscode.md)
+* [Develop U-SQL with Python, R, and C# for Azure Data Lake Analytics in VSCode](data-lake-analytics-u-sql-develop-with-python-r-csharp-in-vscode.md)
* [Get started with Data Lake Analytics using PowerShell](data-lake-analytics-get-started-powershell.md) * [Get started with Data Lake Analytics using the Azure portal](data-lake-analytics-get-started-portal.md) * [Use Data Lake Tools for Visual Studio for developing U-SQL applications](data-lake-analytics-data-lake-tools-get-started.md)
data-share Share Your Data Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data-bicep.md
+
+ Title: 'Share outside your org (Bicep) - Azure Data Share quickstart'
+description: Learn how to share data with customers and partners using Azure Data Share and Bicep.
++++ Last updated : 04/04/2022+++
+# Quickstart: Share data using Azure Data Share and Bicep
+
+Learn how to set up a new Azure Data Share from an Azure storage account using Bicep, and start sharing your data with customers and partners outside of your Azure organization. For a list of the supported data stores, see [Supported data stores in Azure Data Share](./supported-data-stores.md).
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/data-share-share-storage-account/).
++
+The following resources are defined in the Bicep file:
+
+* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts)
+* [Microsoft.Storage/storageAccounts/blobServices/containers](/azure/templates/microsoft.storage/storageaccounts/blobservices/containers)
+* [Microsoft.DataShare/accounts](/azure/templates/microsoft.datashare/accounts)
+* [Microsoft.DataShare/accounts/shares](/azure/templates/microsoft.datashare/accounts/shares)
+* [Microsoft.Storage/storageAccounts/providers/roleAssignments](/azure/templates/microsoft.authorization/roleassignments)
+* [Microsoft.DataShare/accounts/shares/dataSets](/azure/templates/microsoft.datashare/accounts/shares/datasets)
+* [Microsoft.DataShare/accounts/shares/invitations](/azure/templates/microsoft.datashare/accounts/shares/invitations)
+* [Microsoft.DataShare/accounts/shares/synchronizationSettings](/azure/templates/microsoft.datashare/accounts/shares/synchronizationsettings)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters projectName=<project-name> invitationEmail=<invitation-email>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -projectName "<project-name>" -invitationEmail "<invitation-email>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<project-name\>** with a project name. The project name will be used to generate resource names. Replace **\<invitation-email\>** with an email address for receiving data share invitations.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you learned how to create an Azure data share and invite recipients. To learn more about how a data consumer can accept and receive a data share, continue to the [accept and receive data](subscribe-to-data-share.md) tutorial.
ddos-protection Manage Ddos Protection Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-bicep.md
+
+ Title: Create and enable an Azure DDoS Protection plan using Bicep.
+description: Learn how to create and enable an Azure DDoS Protection plan using Bicep.
+
+documentationcenter: na
+++
+ na
+++ Last updated : 04/04/2022++
+# Quickstart: Create an Azure DDoS Protection Standard using Bicep
+
+This quickstart describes how to use Bicep to create a distributed denial of service (DDoS) protection plan and virtual network (VNet), then enable the protection plan for the VNet. An Azure DDoS Protection Standard plan defines a set of virtual networks that have DDoS protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
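+
+For context, linking an existing virtual network to a plan can also be sketched with the Azure CLI; the resource names here are placeholders, and this is an alternative illustration rather than part of the Bicep deployment below:
+
+```bash
+# Create a protection plan, then enable it on an existing virtual network.
+az network ddos-protection create --resource-group <resource-group> --name <plan-name>
+az network vnet update --resource-group <resource-group> --name <vnet-name> \
+  --ddos-protection true --ddos-protection-plan <plan-name>
+```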
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-and-enable-ddos-protection-plans).
++
+The Bicep file defines two resources:
+
+- [Microsoft.Network/ddosProtectionPlans](/azure/templates/microsoft.network/ddosprotectionplans)
+- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks)
+
+## Deploy the Bicep file
+
+In this example, the Bicep file creates a new resource group, a DDoS protection plan, and a VNet.
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters ddosProtectionPlanName=<plan-name> virtualNetworkName=<network-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -ddosProtectionPlanName "<plan-name>" -virtualNetworkName "<network-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<plan-name\>** with a DDoS protection plan name. Replace **\<network-name\>** with a DDoS virtual network name.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials.
+
+> [!div class="nextstepaction"]
+> [View and configure DDoS protection telemetry](telemetry.md)
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
Title: Adaptive application controls in Microsoft Defender for Cloud description: This document helps you use adaptive application control in Microsoft Defender for Cloud to create an allowlist of applications running for Azure machines.++ Last updated 11/09/2021
No enforcement options are currently available. Adaptive application controls ar
|Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
|Supported machines:|:::image type="icon" source="./media/icons/yes-icon.png"::: Azure and non-Azure machines running Windows and Linux<br>:::image type="icon" source="./media/icons/yes-icon.png"::: [Azure Arc](../azure-arc/index.yml) machines| |Required roles and permissions:|**Security Reader** and **Reader** roles can both view groups and the lists of known-safe applications<br>**Contributor** and **Security Admin** roles can both edit groups and the lists of known-safe applications| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
Select the recommendation, or open the adaptive application controls page to vie
- AppLocker is not available (Windows Server Core installations) > [!TIP]
- > Defender for Cloud needs at least two weeks of data to define the unique recommendations per group of machines. Machines that have recently been created, or which belong to subscriptions that were only recently protected by Microsoft Defender for servers, will appear under the **No recommendation** tab.
+ > Defender for Cloud needs at least two weeks of data to define the unique recommendations per group of machines. Machines that have recently been created, or which belong to subscriptions that were only recently protected by Microsoft Defender for Servers, will appear under the **No recommendation** tab.
1. Open the **Recommended** tab. The groups of machines with recommended allowlists appears.
Some of the functions that are available from the REST API:
No enforcement options are currently available. Adaptive application controls are intended to provide **security alerts** if any application runs other than the ones you've defined as safe. They have a range of benefits ([What are the benefits of adaptive application controls?](#what-are-the-benefits-of-adaptive-application-controls)) and are extremely customizable as shown on this page. ### Why do I see a Qualys app in my recommended applications?
-[Microsoft Defender for servers](defender-for-servers-introduction.md) includes vulnerability scanning for your machines at no extra cost. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. For details of this scanner and instructions for how to deploy it, see [Defender for Cloud's integrated Qualys vulnerability assessment solution](deploy-vulnerability-assessment-vm.md).
+[Microsoft Defender for Servers](defender-for-servers-introduction.md) includes vulnerability scanning for your machines at no extra cost. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. For details of this scanner and instructions for how to deploy it, see [Defender for Cloud's integrated Qualys vulnerability assessment solution](deploy-vulnerability-assessment-vm.md).
To ensure no alerts are generated when Defender for Cloud deploys the scanner, the adaptive application controls recommended allowlist includes the scanner for all machines.
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
Title: Adaptive network hardening in Microsoft Defender for Cloud | Microsoft Docs description: Learn how to use actual traffic patterns to harden your network security groups (NSG) rules and further improve your security posture.-- ++ Last updated 11/09/2021 # Improve your network security posture with adaptive network hardening
This page explains how to configure and manage adaptive network hardening in Def
|Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
|Required roles and permissions:|Write permissions on the machineΓÇÖs NSGs| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
For example, let's say the existing NSG rule is to allow traffic from 140.20.30.
* **Unscanned resources**: VMs that the adaptive network hardening algorithm cannot be run on because of one of the following reasons: * **VMs are Classic VMs**: Only Azure Resource Manager VMs are supported. * **Not enough data is available**: In order to generate accurate traffic hardening recommendations, Defender for Cloud requires at least 30 days of traffic data.
- * **VM is not protected by Microsoft Defender for servers**: Only VMs protected with [Microsoft Defender for servers](defender-for-servers-introduction.md) are eligible for this feature.
+ * **VM is not protected by Microsoft Defender for Servers**: Only VMs protected with [Microsoft Defender for Servers](defender-for-servers-introduction.md) are eligible for this feature.
:::image type="content" source="./media/adaptive-network-hardening/recommendation-details-page.png" alt-text="Details page of the recommendation Adaptive Network Hardening recommendations should be applied on internet facing virtual machines.":::
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), asset
## Access a software inventory
-If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for servers, you'll have access to the software inventory.
+If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you'll have access to the software inventory.
:::image type="content" source="media/asset-inventory/software-inventory-filters.gif" alt-text="If you've enabled the threat and vulnerability solution, Defender for Cloud's asset inventory offers a filter to select resources by their installed software."::: > [!NOTE]
-> The "Blank" option shows machines without Microsoft Defender for Endpoint (or without Microsoft Defender for servers).
+> The "Blank" option shows machines without Microsoft Defender for Endpoint (or without Microsoft Defender for Servers).
As well as the filters in the asset inventory page, you can explore the software inventory data from Azure Resource Graph Explorer.
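
For example, here's a sketch of such a query run through the Azure CLI; the `resource-graph` extension and the projected property names are assumptions based on the `softwareinventories` schema:

```bash
# Requires the Resource Graph CLI extension: az extension add --name resource-graph
az graph query -q "securityresources | where type == 'microsoft.security/softwareinventories' | project id, properties.softwareName, properties.version | limit 10"
```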
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Defender for Cloud collects data from your machines using agents and extensions.
To assess your machines for vulnerabilities, you can use one of the following solutions:

-- Microsoft's threat and vulnerability management module of Microsoft Defender for Endpoint (included with Microsoft Defender for servers)
-- An integrated Qualys agent (included with Microsoft Defender for servers)
+- Microsoft's threat and vulnerability management module of Microsoft Defender for Endpoint (included with Microsoft Defender for Servers)
+- An integrated Qualys agent (included with Microsoft Defender for Servers)
- A Qualys or Rapid7 scanner which you have licensed separately and configured within Defender for Cloud (this is called the Bring Your Own License, or BYOL, scenario)

> [!NOTE]
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
zone_pivot_groups: manage-asc-initiatives
To help secure your systems and environment, Microsoft Defender for Cloud generates security recommendations. These recommendations are based on industry best practices, which are incorporated into the generic, default security policy supplied to all customers. They can also come from Defender for Cloud's knowledge of industry and regulatory standards.
-With this feature, you can add your own *custom* initiatives. Although custom initiatives are not included in the secure score, you'll receive recommendations if your environment doesn't follow the policies you create. Any custom initiatives you create are shown in the list of all recommendations and you can filter by initiative to see the recommendations for your initiative. They are also shonw with the built-in initiatives in the regulatory compliance dashboard, as described in the tutorial [Improve your regulatory compliance](regulatory-compliance-dashboard.md).
+With this feature, you can add your own *custom* initiatives. Although custom initiatives are not included in the secure score, you'll receive recommendations if your environment doesn't follow the policies you create. Any custom initiatives you create are shown in the list of all recommendations and you can filter by initiative to see the recommendations for your initiative. They are also shown with the built-in initiatives in the regulatory compliance dashboard, as described in the tutorial [Improve your regulatory compliance](regulatory-compliance-dashboard.md).
As discussed in [the Azure Policy documentation](../governance/policy/concepts/definition-structure.md#definition-location), when you specify a location for your custom initiative, it must be a management group or a subscription.
This example shows you how to assign the built-in Defender for Cloud initiative
```

This example shows you how to assign a custom Defender for Cloud initiative on a subscription or management group:
+> [!NOTE]
+> Make sure you include `"ASC":"true"` in the request body as shown here. The `ASC` field onboards the initiative to Microsoft Defender for Cloud.
-    ```
-    PUT
-    PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policySetDefinitions/{policySetDefinitionName}?api-version=2021-06-01
-    Request Body (JSON)
-    {
-        "properties": {
-            "displayName": "Cost Management",
-            "description": "Policies to enforce low cost storage SKUs",
-            "metadata": {
-                "category": "Cost Management"
-                "ASC":"true"
-            },
-            "parameters": {
-                "namePrefix": {
-                    "type": "String",
-                    "defaultValue": "myPrefix",
-                    "metadata": {
-                        "displayName": "Prefix to enforce on resource names"
-                    }
-                }
-            },
-            "policyDefinitions": [
-                {
-                    "policyDefinitionId": "/subscriptions/ae640e6b-ba3e-4256-9d62-2993eecfa6f2/providers/Microsoft.Authorization/policyDefinitions/7433c107-6db4-4ad1-b57a-a76dce0154a1",
-                    "policyDefinitionReferenceId": "Limit_Skus",
-                    "parameters": {
-                        "listOfAllowedSKUs": {
-                            "value": [
-                                "Standard_GRS",
-                                "Standard_LRS"
-                            ]
-                        }
-                    }
-                },
-                {
-                    "policyDefinitionId": "/subscriptions/ae640e6b-ba3e-4256-9d62-2993eecfa6f2/providers/Microsoft.Authorization/policyDefinitions/ResourceNaming",
-                    "policyDefinitionReferenceId": "Resource_Naming",
-                    "parameters": {
-                        "prefix": {
-                            "value": "[parameters('namePrefix')]"
-                        },
-                        "suffix": {
-                            "value": "-LC"
-                        }
-                    }
-                }
-            ]
-        }
-    }
-    ```
+```
+PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policySetDefinitions/{policySetDefinitionName}?api-version=2021-06-01
+Request Body (JSON)
+{
+    "properties": {
+        "displayName": "Cost Management",
+        "description": "Policies to enforce low cost storage SKUs",
+        "metadata": {
+            "category": "Cost Management",
+            "ASC": "true"
+        },
+        "parameters": {
+            "namePrefix": {
+                "type": "String",
+                "defaultValue": "myPrefix",
+                "metadata": {
+                    "displayName": "Prefix to enforce on resource names"
+                }
+            }
+        },
+        "policyDefinitions": [
+            {
+                "policyDefinitionId": "/subscriptions/ae640e6b-ba3e-4256-9d62-2993eecfa6f2/providers/Microsoft.Authorization/policyDefinitions/7433c107-6db4-4ad1-b57a-a76dce0154a1",
+                "policyDefinitionReferenceId": "Limit_Skus",
+                "parameters": {
+                    "listOfAllowedSKUs": {
+                        "value": [
+                            "Standard_GRS",
+                            "Standard_LRS"
+                        ]
+                    }
+                }
+            },
+            {
+                "policyDefinitionId": "/subscriptions/ae640e6b-ba3e-4256-9d62-2993eecfa6f2/providers/Microsoft.Authorization/policyDefinitions/ResourceNaming",
+                "policyDefinitionReferenceId": "Resource_Naming",
+                "parameters": {
+                    "prefix": {
+                        "value": "[parameters('namePrefix')]"
+                    },
+                    "suffix": {
+                        "value": "-LC"
+                    }
+                }
+            }
+        ]
+    }
+}
+```
This example shows you how to remove an assignment:
- ```
- DELETE
- https://management.azure.com/{scope}/providers/Microsoft.Authorization/policyAssignments/{policyAssignmentName}?api-version=2018-05-01
- ```
+```
+DELETE https://management.azure.com/{scope}/providers/Microsoft.Authorization/policyAssignments/{policyAssignmentName}?api-version=2018-05-01
+```
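The DELETE call above removes an assignment; creating one is left implicit. A minimal sketch using the standard policy assignments API (the display name and scope here are placeholders):

```
PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/policyAssignments/{policyAssignmentName}?api-version=2021-06-01

Request Body (JSON)

{
    "properties": {
        "displayName": "Cost Management",
        "policyDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policySetDefinitions/{policySetDefinitionName}"
    }
}
```

Note that `policyDefinitionId` accepts the resource ID of an initiative (policy set definition) as well as a single policy definition.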
::: zone-end
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
For example, if you've [connected an Amazon Web Services (AWS) account](quicksta
- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
- **Microsoft Defender for Kubernetes** extends its container threat detection and advanced defenses to your **Amazon EKS Linux clusters**.
-- **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
+- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
Learn more about connecting your [AWS](quickstart-onboard-aws.md) and [GCP](quickstart-onboard-gcp.md) accounts to Microsoft Defender for Cloud.
Learn more about connecting your [AWS](quickstart-onboard-aws.md) and [GCP](quic
Defender for Cloud includes vulnerability assessment solutions for your virtual machines, container registries, and SQL servers as part of the enhanced security features. Some of the scanners are powered by Qualys. But you don't need a Qualys license, or even a Qualys account - everything's handled seamlessly inside Defender for Cloud.
-Microsoft Defender for servers includes automatic, native integration with Microsoft Defender for Endpoint. Learn more, [Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md). With this integration enabled, you'll have access to the vulnerability findings from **Microsoft threat and vulnerability management**. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
+Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint. For more information, see [Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md). With this integration enabled, you'll have access to the vulnerability findings from **Microsoft threat and vulnerability management**. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
Review the findings from these vulnerability scanners and respond to them all from within Defender for Cloud. This broad approach brings Defender for Cloud closer to being the single pane of glass for all of your cloud security efforts.
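If you'd rather pull those findings programmatically than read them in the portal, the scanners' results surface in Azure Resource Graph as security sub-assessments. A sketch via the Resource Graph REST API; the property paths projected here are assumptions to check against your tenant's data:

```
POST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01

Request Body (JSON)

{
    "subscriptions": [ "{subscriptionId}" ],
    "query": "securityresources | where type == 'microsoft.security/assessments/subassessments' | project finding = tostring(properties.displayName), severity = tostring(properties.status.severity), resource = tostring(properties.resourceDetails.id)"
}
```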
Defender for Cloud provides:
The **Defender plans** page of Microsoft Defender for Cloud offers the following plans for comprehensive defenses for the compute, data, and service layers of your environment:

-- [Microsoft Defender for servers](defender-for-servers-introduction.md)
+- [Microsoft Defender for Servers](defender-for-servers-introduction.md)
- [Microsoft Defender for Storage](defender-for-storage-introduction.md)
- [Microsoft Defender for SQL](defender-for-sql-introduction.md)
- [Microsoft Defender for Containers](defender-for-containers-introduction.md)
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Defender for Containers protects your clusters whether they're running in:
- **Google Kubernetes Engine (GKE) in a connected Google Cloud Platform (GCP) project** - Google's managed environment for deploying, managing, and scaling applications using GCP infrastructure.

-- **An unmanaged Kubernetes distribution** (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS.
+- **Other Kubernetes distributions** (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS. For more information, see the **On-prem/IaaS (Arc)** section of [Supported features by environment](supported-machines-endpoint-solutions-clouds-containers.md#supported-features-by-environment).
Learn about this plan in [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md).
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Defender for Cloud provides real-time threat protection for your Azure Kubernetes Service (AKS) containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the analysis of the Kubernetes audit logs.
-Host-level threat detection for your Linux AKS nodes is available if you enable [Microsoft Defender for servers](defender-for-servers-introduction.md) and its Log Analytics agent. However, if your cluster is deployed on an Azure Kubernetes Service virtual machine scale set, the Log Analytics agent is not currently supported.
+Host-level threat detection for your Linux AKS nodes is available if you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) and its Log Analytics agent. However, if your cluster is deployed on an Azure Kubernetes Service virtual machine scale set, the Log Analytics agent is not currently supported.
## Availability
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for servers - the benefits and features
-description: Learn about the benefits and features of Microsoft Defender for servers.
Previously updated : 03/08/2022
+ Title: Microsoft Defender for Servers - the benefits and features
+description: Learn about the benefits and features of Microsoft Defender for Servers.
Last updated : 03/28/2022
-# Introduction to Microsoft Defender for servers
+# Introduction to Microsoft Defender for Servers
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Microsoft Defender for servers is one of the enhanced security features of Microsoft Defender for Cloud. Use it to add threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, on-premises, or in a multi-cloud environment.
+Microsoft Defender for Servers is one of the enhanced security features of Microsoft Defender for Cloud. Use it to add threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, AWS, GCP, or an on-premises environment.
To protect machines in hybrid and multi-cloud environments, Defender for Cloud uses [Azure Arc](../azure-arc/index.yml). Connect your hybrid and multi-cloud machines as explained in the relevant quickstart:

- [Connect your non-Azure machines to Microsoft Defender for Cloud](quickstart-onboard-machines.md)
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)

> [!TIP]
-> For details of which Defender for servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers-).
+> For details of which Defender for Servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers-).
-## What are the benefits of Microsoft Defender for servers?
+## What are the Microsoft Defender for server plans?
-The threat detection and protection capabilities provided with Microsoft Defender for servers include:
+Microsoft Defender for Servers provides threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, AWS, GCP, or on-premises. Microsoft Defender for Servers is available in two plans:
-- **Integrated license for Microsoft Defender for Endpoint** - Microsoft Defender for servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities. For more information, see [Protect your endpoints](integration-defender-for-endpoint.md).
+- **Microsoft Defender for Servers Plan 1** - deploys Microsoft Defender for Endpoint to your servers with these additional capabilities:
+ - Microsoft Defender for Endpoint licenses are charged per hour instead of per seat, so you pay to protect virtual machines only while they're in use.
+ - Microsoft Defender for Endpoint is deployed automatically to all cloud workloads so that you know they are protected when they spin up.
+ - Alerts and vulnerability data from Microsoft Defender for Endpoint are shown in Microsoft Defender for Cloud.
- When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can also pivot to the Defender for Endpoint console, and perform a detailed investigation to uncover the scope of the attack. Learn more about Microsoft Defender for Endpoint.
+- **Microsoft Defender for Servers Plan 2** (formerly Defender for Servers) - includes the benefits of Plan 1 and support for all of the other Microsoft Defender for Servers features.
- > [!IMPORTANT]
- > Defender for Cloud's integration with Microsoft Defender for Endpoint is enabled by default. So when you enable Microsoft Defender for servers, you give consent for Defender for Cloud to access the Microsoft Defender for Endpoint data related to vulnerabilities, installed software, and alerts for your endpoints.
- >
- > Learn more in [Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
+For pricing details in your currency of choice and according to your region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-- **Vulnerability assessment tools for machines** - Microsoft Defender for servers includes a choice of vulnerability discovery and management tools for your machines. From Defender for Cloud's settings pages, you can select which of these tools to deploy to your machines and the discovered vulnerabilities will be shown in a security recommendation.
+To enable the Microsoft Defender for Servers plans:
+
+1. Go to **Environment settings** and select your subscription.
+2. If Microsoft Defender for Servers is not enabled, set it to **On**.
+ Plan 2 is selected by default.
+
+ If you want to change the Defender for Servers plan:
+ 1. In the **Plan/Pricing** column, select **Configure**.
+ 2. Select the plan that you want.
+
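If you enable plans across many subscriptions, the same switch can be flipped through the REST API. A minimal sketch, assuming the `Microsoft.Security/pricings` endpoint and its `subPlan` field (present in the newer API versions; use `P2` for Plan 2, and set `pricingTier` to `Free` to turn the plan off):

```
PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/VirtualMachines?api-version=2022-03-01

Request Body (JSON)

{
    "properties": {
        "pricingTier": "Standard",
        "subPlan": "P1"
    }
}
```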
+The following table describes what's included in each plan at a high level.
+
+| Feature | Defender for Servers Plan 1 | Defender for Servers Plan 2 |
+|:---|:---:|:---:|
+| Automatic onboarding for resources in Azure, AWS, GCP | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Microsoft threat and vulnerability management | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Flexibility to use Microsoft Defender for Cloud or Microsoft 365 Defender portal | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Integration of Microsoft Defender for Cloud and Microsoft Defender for Endpoint (alerts, software inventory, Vulnerability Assessment) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Log Analytics (500 MB free) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Security Policy & Regulatory Compliance | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Vulnerability Assessment using Qualys | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Threat detections: OS level, network layer, control plane | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Adaptive application controls | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| File integrity monitoring | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Just-in time VM access | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Adaptive Network Hardening | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+<!-- | Future ΓÇô TVM P2 | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Future ΓÇô disk scanning insights | | :::image type="icon" source="./media/icons/yes-icon.png"::: | -->
+
+## What are the benefits of Defender for Servers?
+
+The threat detection and protection capabilities provided with Microsoft Defender for Servers include:
+
+- **Integrated license for Microsoft Defender for Endpoint** - Microsoft Defender for Servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities. When you enable Microsoft Defender for Servers, you give consent for Defender for Cloud to access the Microsoft Defender for Endpoint data related to vulnerabilities, installed software, and alerts for your endpoints.
+
+ When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can also pivot to the Defender for Endpoint console, and perform a detailed investigation to uncover the scope of the attack. For more information, see [Protect your endpoints](integration-defender-for-endpoint.md).
+
+- **Vulnerability assessment tools for machines** - Microsoft Defender for Servers includes a choice of vulnerability discovery and management tools for your machines. From Defender for Cloud's settings pages, you can select which of these tools to deploy to your machines and the discovered vulnerabilities will be shown in a security recommendation.
- **Microsoft threat and vulnerability management** - Discover vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, and without the need of additional agents or periodic scans. [Threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt) prioritizes vulnerabilities based on the threat landscape, detections in your organization, sensitive information on vulnerable devices, and business context. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md)
- - **Vulnerability scanner powered by Qualys** - Qualys' scanner is one of the leading tools for real-time identification of vulnerabilities in your Azure and hybrid virtual machines. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. Learn more in [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md).
+ - **Vulnerability scanner powered by Qualys** - The Qualys scanner is one of the leading tools for real-time identification of vulnerabilities in your Azure and hybrid virtual machines. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. Learn more in [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md).
- **Just-in-time (JIT) virtual machine (VM) access** - Threat actors actively hunt accessible machines with open management ports, like RDP or SSH. All of your virtual machines are potential targets for an attack. When a VM is successfully compromised, it's used as the entry point to attack further resources within your environment.
- When you enable Microsoft Defender for servers, you can use just-in-time VM access to lock down the inbound traffic to your VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed. For more information, see [Understanding JIT VM access](just-in-time-access-overview.md).
+ When you enable Microsoft Defender for Servers, you can use just-in-time VM access to lock down the inbound traffic to your VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed (a policy sketch follows this list). For more information, see [Understanding JIT VM access](just-in-time-access-overview.md).
- **File integrity monitoring (FIM)** - File integrity monitoring (FIM), also known as change monitoring, examines files and registries of operating system, application software, and others for changes that might indicate an attack. A comparison method is used to determine if the current state of the file is different from the last scan of the file. You can use this comparison to determine if valid or suspicious modifications have been made to your files.
- When you enable Microsoft Defender for servers, you can use FIM to validate the integrity of Windows files, your Windows registries, and Linux files. For more information, see [File integrity monitoring in Microsoft Defender for Cloud](file-integrity-monitoring-overview.md).
+ When you enable Microsoft Defender for Servers, you can use FIM to validate the integrity of Windows files, your Windows registries, and Linux files. For more information, see [File integrity monitoring in Microsoft Defender for Cloud](file-integrity-monitoring-overview.md).
- **Adaptive application controls (AAC)** - Adaptive application controls are an intelligent and automated solution for defining allowlists of known-safe applications for your machines.
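To make the just-in-time mechanism above concrete, here is a sketch of a JIT policy created through the `jitNetworkAccessPolicies` REST API; the VM, port, and maximum access duration are illustrative values:

```
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Security/locations/{location}/jitNetworkAccessPolicies/default?api-version=2020-01-01

Request Body (JSON)

{
    "kind": "Basic",
    "properties": {
        "virtualMachines": [
            {
                "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}",
                "ports": [
                    {
                        "number": 22,
                        "protocol": "TCP",
                        "allowedSourceAddressPrefix": "*",
                        "maxRequestAccessDuration": "PT3H"
                    }
                ]
            }
        ]
    }
}
```

Until someone requests access, the associated NSG keeps port 22 closed; an approved request opens it for at most the three hours declared in `maxRequestAccessDuration`.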
The threat detection and protection capabilities provided with Microsoft Defende
For a list of the Linux alerts, see the [Reference table of alerts](alerts-reference.md#alerts-linux).
-## How does Defender for servers collect data?
+## How does Defender for Servers collect data?
For Windows, Microsoft Defender for Cloud integrates with Azure services to monitor and protect your Windows-based machines. Defender for Cloud presents the alerts and remediation suggestions from all of these services in an easy-to-use format.
You can simulate alerts by downloading one of the following playbooks:
## Next steps
-In this article, you learned about Microsoft Defender for servers.
+In this article, you learned about Microsoft Defender for Servers.
> [!div class="nextstepaction"] > [Enable enhanced protections](enable-enhanced-security.md)
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
Last updated 11/09/2021
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-If you've enabled **Microsoft Defender for servers**, you're able to use Microsoft Defender for Cloud's built-in vulnerability assessment tool as described in [Integrated Qualys vulnerability scanner for virtual machines](./deploy-vulnerability-assessment-vm.md). This tool is integrated into Defender for Cloud and doesn't require any external licenses - everything's handled seamlessly inside Defender for Cloud. In addition, the integrated scanner supports Azure Arc-enabled machines.
+If you've enabled **Microsoft Defender for Servers**, you're able to use Microsoft Defender for Cloud's built-in vulnerability assessment tool as described in [Integrated Qualys vulnerability scanner for virtual machines](./deploy-vulnerability-assessment-vm.md). This tool is integrated into Defender for Cloud and doesn't require any external licenses - everything's handled seamlessly inside Defender for Cloud. In addition, the integrated scanner supports Azure Arc-enabled machines.
Alternatively, you might want to deploy your own privately licensed vulnerability assessment solution from [Qualys](https://www.qualys.com/lp/azure) or [Rapid7](https://www.rapid7.com/products/insightvm/). You can install one of these partner solutions on multiple VMs belonging to the same subscription (but not to Azure Arc-enabled machines).
Supported solutions report vulnerability data to the partner's management platfo
> Depending on your configuration, you might only see a subset of this list. > > - If you haven't got a third-party vulnerability scanner configured, you won't be offered the opportunity to deploy it.
- > - If your selected VMs aren't protected by Microsoft Defender for servers, the Defender for Cloud integrated vulnerability scanner option will be unavailable.
+ > - If your selected VMs aren't protected by Microsoft Defender for Servers, the Defender for Cloud integrated vulnerability scanner option will be unavailable.
:::image type="content" source="./media/deploy-vulnerability-assessment-vm/recommendation-remediation-options.png" alt-text="The options for which type of remediation flow you want to choose when responding to the recommendation **A vulnerability assessment solution should be enabled on your virtual machines** recommendation page":::
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
description: Enable, deploy, and use Microsoft Defender for Endpoint's threat an
Previously updated : 03/06/2022 Last updated : 03/23/2022

# Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management
For a quick overview of threat and vulnerability management, watch this video:
|-|:-|
|Release state:|General availability (GA)|
|Machine types:|:::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines <br> [Supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os)|
-|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
|Prerequisites:|Enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)|
|Required roles and permissions:|[Owner](../role-based-access-control/built-in-roles.md#owner) (resource group level) can deploy the scanner<br>[Security Reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Title: Defender for Cloud's integrated vulnerability assessment solution for Azure, hybrid, and multi-cloud machines
description: Install a vulnerability assessment solution on your Azure machines to get recommendations in Microsoft Defender for Cloud that can help you protect your Azure and hybrid machines
Last updated 11/16/2021

# Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines
Deploy the vulnerability assessment solution that best meets your needs and bud
|-|:-|
|Release state:|General availability (GA)|
|Machine types (hybrid scenarios):|:::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines|
-|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
|Required roles and permissions:|[Owner](../role-based-access-control/built-in-roles.md#owner) (resource group level) can deploy the scanner<br>[Security Reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|

## Overview of the integrated vulnerability scanner
-The vulnerability scanner included with Microsoft Defender for Cloud is powered by Qualys. Qualys' scanner is one of the leading tools for real-time identification of vulnerabilities. It's only available with [Microsoft Defender for servers](defender-for-servers-introduction.md). You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud.
+The vulnerability scanner included with Microsoft Defender for Cloud is powered by Qualys. Qualys' scanner is one of the leading tools for real-time identification of vulnerabilities. It's only available with [Microsoft Defender for Servers](defender-for-servers-introduction.md). You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud.
### How the integrated vulnerability scanner works
The vulnerability scanner extension works as follows:
> Depending on your configuration, this list might appear differently. > > - If you haven't got a third-party vulnerability scanner configured, you won't be offered the opportunity to deploy it.
- > - If your selected machines aren't protected by Microsoft Defender for servers, the Defender for Cloud integrated vulnerability scanner option won't be available.
+ > - If your selected machines aren't protected by Microsoft Defender for Servers, the Defender for Cloud integrated vulnerability scanner option won't be available.
:::image type="content" source="./media/deploy-vulnerability-assessment-vm/recommendation-remediation-options-builtin.png" alt-text="The options for which type of remediation flow you want to choose when responding to the recommendation ** Machines should have a vulnerability assessment solution** recommendation page":::
The following commands trigger an on-demand scan:
## FAQ - Integrated vulnerability scanner (powered by Qualys)

### Are there any additional charges for the Qualys license?
-No. The built-in scanner is free to all Microsoft Defender for servers users. The recommendation deploys the scanner with its licensing and configuration information. No additional licenses are required.
+No. The built-in scanner is free to all Microsoft Defender for Servers users. The recommendation deploys the scanner with its licensing and configuration information. No additional licenses are required.
### What prerequisites and permissions are required to install the Qualys extension?

You'll need write permissions for any machine on which you want to deploy the extension.
If you have machines in the **not applicable** resources group, it means Defende
Your machine might be in this tab because:

-- It's not protected by Defender for Cloud - As explained above, the vulnerability scanner included with Microsoft Defender for Cloud is only available for machines protected by [Microsoft Defender for servers](defender-for-servers-introduction.md).
+- It's not protected by Defender for Cloud - As explained above, the vulnerability scanner included with Microsoft Defender for Cloud is only available for machines protected by [Microsoft Defender for Servers](defender-for-servers-introduction.md).
- It's an image in an AKS cluster or part of a virtual machine scale set - This extension doesn't support VMs that are PaaS resources.
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
This table shows the availability details for the auto provisioning **feature**
| Aspect | Azure virtual machines | Azure Arc-enabled machines |
|--|:--|:--|
| Release state: | Generally available (GA) | Preview |
-| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) | [Microsoft Defender for servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) |
+| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) | [Microsoft Defender for Servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) |
| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
This table shows the availability details for the auto provisioning **feature**
| Aspect | Details |
|--|:--|
| Release state: | Generally available (GA) |
-| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md) |
+| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md) |
| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) |
| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
This table shows the availability details for the auto provisioning **feature**
| Aspect | Linux | Windows |
|--|:--|:--|
| Release state: | Generally available (GA) | Generally available (GA) |
-| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md) | [Microsoft Defender for servers](defender-for-servers-introduction.md) |
+| Relevant Defender plan: | [Microsoft Defender for Servers](defender-for-servers-introduction.md) | [Microsoft Defender for Servers](defender-for-servers-introduction.md) |
| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) |
| Supported destinations: | :::image type="icon" source="./medi), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 10 |
| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/no-icon.png"::: No |
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-enhanced-security.md
A free 30-day trial is available. For pricing details in your local currency or
## Enable enhanced security features from the Azure portal
-To enable all Defender for Cloud features including threat protection capabilities, you must enable enhanced security features on the subscription containing the applicable workloads. Enabling it at the workspace level doesn't enable just-in-time VM access, adaptive application controls, and network detections for Azure resources. In addition, the only Microsoft Defender plans available at the workspace level are Microsoft Defender for servers and Microsoft Defender for SQL servers on machines.
+To enable all Defender for Cloud features including threat protection capabilities, you must enable enhanced security features on the subscription containing the applicable workloads. Enabling it at the workspace level doesn't enable just-in-time VM access, adaptive application controls, and network detections for Azure resources. In addition, the only Microsoft Defender plans available at the workspace level are Microsoft Defender for Servers and Microsoft Defender for SQL servers on machines.
- You can enable **Microsoft Defender for Storage accounts** at either the subscription level or resource level
- You can enable **Microsoft Defender for SQL** at either the subscription level or resource level
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Defender for Cloud is offered in two modes:
- **Defender for Cloud with all enhanced security features** - Enabling enhanced security extends the capabilities of the free mode to workloads running in private and other public clouds, providing unified security management and threat protection across your hybrid cloud workloads. Some of the major benefits include:
- - **Microsoft Defender for Endpoint** - Microsoft Defender for servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) for comprehensive endpoint detection and response (EDR). Learn more about the benefits of using Microsoft Defender for Endpoint together with Defender for Cloud in [Use Defender for Cloud's integrated EDR solution](integration-defender-for-endpoint.md).
+ - **Microsoft Defender for Endpoint** - Microsoft Defender for Servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) for comprehensive endpoint detection and response (EDR). Learn more about the benefits of using Microsoft Defender for Endpoint together with Defender for Cloud in [Use Defender for Cloud's integrated EDR solution](integration-defender-for-endpoint.md).
- **Vulnerability assessment for virtual machines, container registries, and SQL resources** - Easily enable vulnerability assessment solutions to discover, manage, and resolve vulnerabilities. View, investigate, and remediate the findings directly from within Defender for Cloud.
- **Multi-cloud security** - Connect your accounts from Amazon Web Services (AWS) and Google Cloud Platform (GCP) to protect resources and workloads on those platforms with a range of Microsoft Defender for Cloud security features.
- **Hybrid security** - Get a unified view of security across all of your on-premises and cloud workloads. Apply security policies and continuously assess the security of your hybrid cloud workloads to ensure compliance with security standards. Collect, search, and analyze security data from multiple sources, including firewalls and other partner solutions.
Defender for Cloud is offered in two modes:
## FAQ - Pricing and billing

-- [How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud?](#how-can-i-track-who-in-my-organization-enabled-a-microsoft-defender-plan-in-defender-for-cloud)
-- [What are the plans offered by Defender for Cloud?](#what-are-the-plans-offered-by-defender-for-cloud)
-- [How do I enable Defender for Cloud's enhanced security for my subscription?](#how-do-i-enable-defender-for-clouds-enhanced-security-for-my-subscription)
-- [Can I enable Microsoft Defender for servers on a subset of servers in my subscription?](#can-i-enable-microsoft-defender-for-servers-on-a-subset-of-servers-in-my-subscription)
-- [If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)
-- [My subscription has Microsoft Defender for servers enabled, do I pay for not-running servers?](#my-subscription-has-microsoft-defender-for-servers-enabled-do-i-pay-for-not-running-servers)
-- [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed)
-- [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice)
-- [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
-- [Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)
-- [What data types are included in the 500-MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
+- [How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud?](#how-can-i-track-who-in-my-organization-enabled-a-microsoft-defender-plan-in-defender-for-cloud)
+- [What are the plans offered by Defender for Cloud?](#what-are-the-plans-offered-by-defender-for-cloud)
+- [How do I enable Defender for Cloud's enhanced security for my subscription?](#how-do-i-enable-defender-for-clouds-enhanced-security-for-my-subscription)
+- [Can I enable Microsoft Defender for Servers on a subset of servers in my subscription?](#can-i-enable-microsoft-defender-for-servers-on-a-subset-of-servers-in-my-subscription)
+- [If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)
+- [My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?](#my-subscription-has-microsoft-defender-for-servers-enabled-do-i-pay-for-not-running-servers)
+- [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed)
+- [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice)
+- [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
+- [Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)
+- [What data types are included in the 500-MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
### How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud?
You can use any of the following ways to enable enhanced security for your subsc
| Azure Policy | [Bundle Pricings](https://github.com/Azure/Azure-Security-Center/blob/master/Pricing%20%26%20Settings/ARM%20Templates/Set-ASC-Bundle-Pricing.json) |
-### Can I enable Microsoft Defender for servers on a subset of servers in my subscription?
-No. When you enable [Microsoft Defender for servers](defender-for-servers-introduction.md) on a subscription, all the machines in the subscription will be protected by Defender for servers.
+### Can I enable Microsoft Defender for Servers on a subset of servers in my subscription?
+No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, all the machines in the subscription will be protected by Defender for Servers.
-An alternative is to enable Microsoft Defender for servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more.
+An alternative is to enable Microsoft Defender for Servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more.
-### If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for servers?
-If you've already got a license for **Microsoft Defender for Endpoint for Servers**, you won't have to pay for that part of your Microsoft Defender for servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
+### If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for Servers?
+If you've already got a license for **Microsoft Defender for Endpoint for Servers Plan 2**, you won't have to pay for that part of your Microsoft Defender for Servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace. The discount will be effective starting from the approval date, and won't take place retroactively.
-### My subscription has Microsoft Defender for servers enabled, do I pay for not-running servers?
-No. When you enable [Microsoft Defender for servers](defender-for-servers-introduction.md) on a subscription, you won't be charged for any machines that are in the deallocated power state while they're in that state. Machines are billed according to their power state as shown in the following table:
+### My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?
+No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, you won't be charged for any machines that are in the deallocated power state while they're in that state. Machines are billed according to their power state as shown in the following table:
| State | Description | Instance usage billed |
|--|--|--|
No. When you enable [Microsoft Defender for servers](defender-for-servers-introd
:::image type="content" source="media/enhanced-security-features-overview/deallocated-virtual-machines.png" alt-text="Azure Virtual Machines showing a deallocated machine."::: ### Will I be charged for machines without the Log Analytics agent installed?
-Yes. When you enable [Microsoft Defender for servers](defender-for-servers-introduction.md) on a subscription, the machines in that subscription get a range of protections even if you haven't installed the Log Analytics agent. This is applicable for Azure virtual machines, Azure virtual machine scale sets instances, and Azure Arc-enabled servers.
+Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, the machines in that subscription get a range of protections even if you haven't installed the Log Analytics agent. This is applicable for Azure virtual machines, Azure virtual machine scale sets instances, and Azure Arc-enabled servers.
### If a Log Analytics agent reports to multiple workspaces, will I be charged twice?

Yes. If you've configured your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll be charged for every workspace that has a 'Security' or 'AntiMalware' solution installed.
You'll get 500-MB free data ingestion per day, for every Windows machine connect
This data is a daily rate averaged across all nodes. So even if some machines send 100-MB and others send 800-MB, if the total doesn't exceed the **[number of machines] x 500-MB** free limit, you won't be charged extra.

### What data types are included in the 500-MB data daily allowance?
-Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for Windows machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
+Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for Servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for Windows machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):
- SecurityAlert
- SecurityBaseline
- SecurityBaselineSummary
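To check how much of that allowance your machines actually consume, you can query the workspace's `Usage` table, shown here through the Log Analytics query REST API. A sketch only: it covers just the data types listed above, and assumes `Quantity` is reported in MB:

```
POST https://api.loganalytics.io/v1/workspaces/{workspaceId}/query

Request Body (JSON)

{
    "query": "Usage | where TimeGenerated > ago(1d) | where DataType in ('SecurityAlert', 'SecurityBaseline', 'SecurityBaselineSummary') | summarize IngestedMB = sum(Quantity) by DataType"
}
```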
defender-for-cloud File Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-overview.md
Title: File integrity monitoring in Microsoft Defender for Cloud
description: Learn how to configure file integrity monitoring (FIM) in Microsoft Defender for Cloud using this walkthrough.
Last updated 11/09/2021

# File integrity monitoring in Microsoft Defender for Cloud
Learn how to configure file integrity monitoring (FIM) in Microsoft Defender for
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md).<br>Using the Log Analytics agent, FIM uploads data to the Log Analytics workspace. Data charges apply, based on the amount of data you upload. See [Log Analytics pricing](https://azure.microsoft.com/pricing/details/log-analytics/) to learn more.|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).<br>Using the Log Analytics agent, FIM uploads data to the Log Analytics workspace. Data charges apply, based on the amount of data you upload. See [Log Analytics pricing](https://azure.microsoft.com/pricing/details/log-analytics/) to learn more.|
|Required roles and permissions:|**Workspace owner** can enable/disable FIM (for more information, see [Azure Roles for Log Analytics](/services-hub/health/azure-roles#azure-roles)).<br>**Reader** can view results.|
|Clouds:|:::image type="icon" source="./medi).<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
FIM is only available from Defender for Cloud's pages in the Azure portal. There
- Access and view the status and settings of each workspace
- - ![Upgrade plan icon.][4] Upgrade the workspace to use enhanced security features. This icon Indicates that the workspace or subscription isn't protected with Microsoft Defender for servers. To use the FIM features, your subscription must be protected with this plan. For more information, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
+ - ![Upgrade plan icon.][4] Upgrade the workspace to use enhanced security features. This icon indicates that the workspace or subscription isn't protected with Microsoft Defender for Servers. To use the FIM features, your subscription must be protected with this plan. For more information, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
- ![Enable icon][3] Enable FIM on all machines under the workspace and configure the FIM options. This icon indicates that FIM is not enabled for the workspace.
defender-for-cloud Harden Docker Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/harden-docker-hosts.md
Title: Use Microsoft Defender for Cloud to harden your Docker hosts and protect the containers
description: How to protect your Docker hosts and verify they're compliant with the CIS Docker benchmark
Last updated 11/09/2021

# Harden your Docker hosts
When vulnerabilities are found, they're grouped inside a single recommendation.
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
|Required roles and permissions:|**Reader** on the workspace to which the host connects|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
When vulnerabilities are found, they're grouped inside a single recommendation.
The recommendation page shows the affected resources (Docker hosts).
- :::image type="content" source="./media/monitor-container-security/docker-host-vulnerabilities-found.png" alt-text="Recommendation to remediate vulnerabilities in container security configurations .":::
+ :::image type="content" source="./media/monitor-container-security/docker-host-vulnerabilities-found.png" alt-text="Recommendation to remediate vulnerabilities in container security configurations.":::
> [!NOTE] > Machines that aren't running Docker will be shown in the **Not applicable resources** tab. They'll appear in Azure Policy as Compliant.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint securit
| Aspect | Details | |-|:--| | Release state: | General availability (GA) |
-| Pricing: | Requires [Microsoft Defender for servers](defender-for-servers-introduction.md) |
-| Supported environments: | :::image type="icon" source="./medi), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 11 or Windows 10 (except if running Azure Virtual Desktop or Windows 10 Enterprise multi-session) |
+| Pricing: | Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans) |
+| Supported environments: | :::image type="icon" source="./medi) (formerly Windows Virtual Desktop), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 11 or Windows 10 (except if running Azure Virtual Desktop or Windows 10 Enterprise multi-session) |
| Required roles and permissions: | * To enable/disable the integration: **Security admin** or **Owner**<br>* To view Defender for Endpoint alerts in Defender for Cloud: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government (Windows only)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts |
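Alerts raised by the Defender for Endpoint integration surface as standard Defender for Cloud security alerts, so besides the portal you can also list them with an Azure Resource Graph query. A minimal sketch (inspect the `properties` bag on your own alerts to find the fields worth projecting):

```kusto
// Security alerts visible in Defender for Cloud, including those
// produced by the Defender for Endpoint integration
securityresources
| where type == "microsoft.security/locations/alerts"
| project subscriptionId, alertName = name, properties
```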
Confirm that your machine meets the necessary requirements for Defender for Endp
- **On-premises machines** - Connect your target machines to Azure Arc as explained in [Connect hybrid machines with Azure Arc-enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md).
-1. Enable **Microsoft Defender for servers**. See [Quickstart: Enable Defender for Cloud's enhanced security features](enable-enhanced-security.md).
+1. Enable **Microsoft Defender for Servers**. See [Quickstart: Enable Defender for Cloud's enhanced security features](enable-enhanced-security.md).
> [!IMPORTANT]
- > Defender for Cloud's integration with Microsoft Defender for Endpoint is enabled by default. So when you enable enhanced security features, you give consent for Microsoft Defender for servers to access the Microsoft Defender for Endpoint data related to vulnerabilities, installed software, and alerts for your endpoints.
+ > Defender for Cloud's integration with Microsoft Defender for Endpoint is enabled by default. So when you enable enhanced security features, you give consent for Microsoft Defender for Servers to access the Microsoft Defender for Endpoint data related to vulnerabilities, installed software, and alerts for your endpoints.
1. If you've moved your subscription between Azure tenants, some manual preparatory steps are also required. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
To remove the Defender for Endpoint solution from your machines:
- [What's this "MDE.Windows" / "MDE.Linux" extension running on my machine?](#whats-this-mdewindows--mdelinux-extension-running-on-my-machine) - [What are the licensing requirements for Microsoft Defender for Endpoint?](#what-are-the-licensing-requirements-for-microsoft-defender-for-endpoint)-- [If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Microsoft Defender for servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-microsoft-defender-for-servers)
+- [If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Microsoft Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-microsoft-defender-for-servers)
- [How do I switch from a third-party EDR tool?](#how-do-i-switch-from-a-third-party-edr-tool) ### What's this "MDE.Windows" / "MDE.Linux" extension running on my machine?
If you've enabled the integration, but still don't see the extension running on
1. If 12 hours hasn't passed since you enabled the solution, you'll need to wait until the end of this period to be sure there's an issue to investigate. 1. After 12 hours have passed, if you still don't see the extension running on your machines, check that you've met [Prerequisites](#prerequisites) for the integration.
-1. Ensure you've enabled the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for the subscriptions related to the machines you're investigating.
+1. Ensure you've enabled the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for the subscriptions related to the machines you're investigating.
1. If you've moved your Azure subscription between Azure tenants, some manual preparatory steps are required before Defender for Cloud will deploy Defender for Endpoint. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). ### What are the licensing requirements for Microsoft Defender for Endpoint?
-Defender for Endpoint is included at no extra cost with **Microsoft Defender for servers**. Alternatively, it can be purchased separately for 50 machines or more.
+Defender for Endpoint is included at no extra cost with **Microsoft Defender for Servers**. Alternatively, it can be purchased separately for 50 machines or more.
-### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Microsoft Defender for servers?
-If you've already got a license for **Microsoft Defender for Endpoint for Servers** , you won't have to pay for that part of your Microsoft Defender for servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
+### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Microsoft Defender for Servers?
+If you've already got a license for **Microsoft Defender for Endpoint for Servers**, you won't have to pay for that part of your [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans) license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace. The discount will be effective starting from the approval date, and won't take place retroactively.
+## Does Microsoft Defender for Servers support the new unified Microsoft Defender for Endpoint agent for Windows Server 2012 R2 and 2016?
+
+In October 2021, we released [a new Microsoft Defender for Endpoint solution stack](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292) to public preview for Windows Server 2012 R2 and 2016. The new solution stack does not use or require installation of the Microsoft Monitoring Agent (MMA).
+
+The new version of Microsoft Defender for Endpoint is deployed by Defender for Servers Plan 1 for Windows Server 2012 R2 and 2016.
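To check which agent rollout has reached your machines, you can look for the extension names mentioned above. A sketch in Azure Resource Graph KQL, assuming the `MDE.Windows`/`MDE.Linux` extension names apply to both Azure VMs and Arc-enabled servers:

```kusto
// Machines that already have the Defender for Endpoint extension
resources
| where type in~ ("microsoft.compute/virtualmachines/extensions",
                  "microsoft.hybridcompute/machines/extensions")
| where name in ("MDE.Windows", "MDE.Linux")
| project machineAndExtension = id, extensionName = name, location
```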
+ ### How do I switch from a third-party EDR tool? Full instructions for switching from a non-Microsoft endpoint solution are available in the Microsoft Defender for Endpoint documentation: [Migration overview](/windows/security/threat-protection/microsoft-defender-atp/switch-to-microsoft-defender-migration).
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
Title: Understanding just-in-time virtual machine access in Microsoft Defender for Cloud description: This document explains how just-in-time VM access in Microsoft Defender for Cloud helps you control access to your Azure virtual machines++ Last updated 11/09/2021
When Defender for Cloud finds a machine that can benefit from JIT, it adds that
### What permissions are needed to configure and use JIT?
-JIT requires [Microsoft Defender for servers](defender-for-servers-introduction.md) to be enabled on the subscription.
+JIT requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans) to be enabled on the subscription.
**Reader** and **SecurityReader** roles can both view the JIT status and parameters.
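Machines that would benefit from JIT show up under the corresponding recommendation, and that assessment data can also be pulled with an Azure Resource Graph query. A sketch, assuming the recommendation's display name contains "just-in-time" (verify the exact name in your portal):

```kusto
// Machines flagged as unhealthy by the JIT recommendation
securityresources
| where type == "microsoft.security/assessments"
| where properties.displayName contains "just-in-time"
| where properties.status.code == "Unhealthy"
| project assessedResource = tostring(properties.resourceDetails.Id)
```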
defender-for-cloud Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/os-coverage.md
Also ensure your Log Analytics agent is [properly configured to send data to Def
To learn more about the specific Defender for Cloud features available on Windows and Linux, see [Feature coverage for machines](supported-machines-endpoint-solutions-clouds-containers.md). > [!NOTE]
-> Even though **Microsoft Defender for servers** is designed to protect servers, most of its features are supported for Windows 10 machines. One feature that isn't currently supported is [Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
+> Even though **Microsoft Defender for Servers** is designed to protect servers, most of its features are supported for Windows 10 machines. One feature that isn't currently supported is [Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
## Managed virtual machine services <a name="virtual-machine"></a>
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
In the center of the page are the **feature tiles**, each linking to a high prof
- **Workload protections** - This is the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the enhanced security features](enhanced-security-features-overview.md). - **Regulatory compliance** - Defender for Cloud provides insights into your compliance posture based on continuous assessments of your Azure environment. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md). - **Firewall Manager** - This tile shows the status of your hubs and networks from [Azure Firewall Manager](../firewall-manager/overview.md).-- **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).
+- **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).
- **Information protection** - A graph on this tile shows the resource types that have been scanned by [Azure Purview](../purview/overview.md), found to contain sensitive data, and have outstanding recommendations and alerts. Follow the **scan** link to access the Azure Purview accounts and configure new scans, or select any other part of the tile to open the [asset inventory](asset-inventory.md) and view your resources according to your Azure Purview data sensitivity classifications. [Learn more](information-protection.md). ### Insights
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
Currently, integrated security solutions include vulnerability assessment by [Qu
> [!NOTE] > Defender for Cloud does not install the Log Analytics agent on partner virtual appliances because most security vendors prohibit external agents running on their appliances.
-To learn more about the integration of vulnerability scanning tools from Qualys, including a built-in scanner available to customers who've enabled Microsoft Defender for servers, see [Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md).
+To learn more about the integration of vulnerability scanning tools from Qualys, including a built-in scanner available to customers who've enabled Microsoft Defender for Servers, see [Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md).
Defender for Cloud also offers vulnerability analysis for your:
defender-for-cloud Prevent Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/prevent-misconfigurations.md
These recommendations can be used with the **enforce** option:
- Microsoft Defender for Key Vault should be enabled - Microsoft Defender for Kubernetes should be enabled - Microsoft Defender for Resource Manager should be enabled-- Microsoft Defender for servers should be enabled
+- Microsoft Defender for Servers should be enabled
- Microsoft Defender for Azure SQL Database servers should be enabled - Microsoft Defender for SQL servers on machines should be enabled - Microsoft Defender for SQL should be enabled for unprotected Azure SQL servers
defender-for-cloud Protect Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/protect-network-resources.md
This article addresses recommendations that apply to your Azure resources from a
The **Networking** features of Defender for Cloud include: -- Network map (requires Microsoft Defender for servers)-- [Adaptive network hardening](adaptive-network-hardening.md) (requires Microsoft Defender for servers)
+- Network map (requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans))
+- [Adaptive network hardening](adaptive-network-hardening.md) (requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans))
- Networking security recommendations ## View your networking resources and their recommendations
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud++ Last updated 03/27/2022 zone_pivot_groups: connect-aws-accounts
To protect your AWS-based resources, you can connect an account with one of two
- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources. - **Microsoft Defender for Containers** brings threat detection and advanced defenses to your Amazon EKS clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
- - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [feature availability table](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multi-cloud).
+ - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [feature availability table](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multi-cloud).
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
|Aspect|Details| |-|:-| |Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|The **CSPM plan** is free.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for servers** plan is billed at the same price as the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
+|Pricing:|The **CSPM plan** is free.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription.| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
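Because Defender for Servers billing for AWS applies only to EC2 machines that actually have the Azure Arc agent deployed, a quick count of Arc-enabled servers gives a rough upper bound on what you'd be charged for. A sketch in Azure Resource Graph KQL (Arc machines from all connected environments appear under this resource type, so filter further if needed):

```kusto
// Count Azure Arc-enabled servers per subscription
resources
| where type =~ "microsoft.hybridcompute/machines"
| summarize arcMachineCount = count() by subscriptionId
```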
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
- At least one Amazon EKS cluster with permission to access the EKS K8s API server. If you need to create a new EKS cluster, follow the instructions in [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). - The resource capacity to create a new SQS queue, Kinesis Fire Hose delivery stream, and S3 bucket in the cluster's region. -- **To enable the Defender for servers plan**, you'll need:
+- **To enable the Defender for Servers plan**, you'll need:
- - Microsoft Defender for servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.
+ - Microsoft Defender for Servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.
- An active AWS account, with EC2 instances.
Defender for Cloud will immediately start scanning your AWS resources and you'll
|Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
|Required roles and permissions:|**Owner** on the relevant Azure subscription<br>**Contributor** can also connect an AWS account if an owner provides the service principal details| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud++ Last updated 03/27/2022 zone_pivot_groups: connect-gcp-accounts
To protect your GCP-based resources, you can connect an account in two different
- **Environment settings page** (Recommended) - This page provides the onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your GCP resources: - **Defender for Cloud's CSPM features** extends to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your GCP resources alongside your Azure resources.
- - **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds-servers.md)
+ - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds-servers.md).
- **Microsoft Defender for Containers** - Microsoft Defender for Containers brings threat detection and advanced defenses to your Google's Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md). :::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
To protect your GCP-based resources, you can connect an account in two different
|Aspect|Details| |-|:-| | Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to the Azure features that are in beta, preview, or otherwise not yet released into general availability. |
-|Pricing:|The **CSPM plan** is free.<br> The **Defender for servers** plan is billed at the same price as the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for GCP at the same price as for Azure resources.|
+|Pricing:|The **CSPM plan** is free.<br> The **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for GCP at the same price as for Azure resources.|
|Required roles and permissions:| **Contributor** on the relevant Azure Subscription| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet, Other Gov)|
Follow the steps below to create your GCP cloud connector.
1. (**Servers only**) When Arc auto-provisioning is enabled, copy the unique numeric ID presented at the end of the Cloud Shell script.
- :::image type="content" source="media/quickstart-onboard-gcp/powershell-unique-id.png" alt-text="Screenshot showing the unique numeric id to be copied." lightbox="media/quickstart-onboard-gcp/powershell-unique-id-expanded.png":::
+ :::image type="content" source="media/quickstart-onboard-gcp/powershell-unique-id.png" alt-text="Screenshot showing the unique numeric I D to be copied." lightbox="media/quickstart-onboard-gcp/powershell-unique-id-expanded.png":::
To locate the unique numeric ID in the GCP portal, navigate to **IAM & Admin** > **Service Accounts**. In the Name column, locate `Azure-Arc for servers onboarding` and copy the unique numeric ID (OAuth 2 Client ID).
By default, all plans are toggled to `On` on the plans select screen.
Connect your GCP VM instances to Azure Arc in order to have full visibility to Microsoft Defender for Servers security content.
-Microsoft Defender for servers brings threat detection and advanced defenses to your GCP VMs instances.
+Microsoft Defender for Servers brings threat detection and advanced defenses to your GCP VM instances.
To have full visibility to Microsoft Defender for Servers security content, ensure you have the following requirements configured: -- Microsoft Defender for servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.
+- Microsoft Defender for Servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.
- Azure Arc for servers installed on your VM instances. - **(Recommended) Auto-provisioning** - Auto-provisioning is enabled by default in the onboarding process and requires owner permissions on the subscription. Arc auto-provisioning process is using the OS config agent on GCP end. Learn more about the [OS config agent availability on GCP machines](https://cloud.google.com/compute/docs/images/os-details#vm-manager).
To have full visibility to Microsoft Defender for Servers security content, ensu
### Configure the Containers plan
-Microsoft Defender for Containers brings threat detection, and advanced defences to your GCP GKE Standard clusters. To get the full security value out of Defender for Containers, and to fully protect GCP clusters, ensure you have the following requirements configured:
+Microsoft Defender for Containers brings threat detection and advanced defenses to your GCP GKE Standard clusters. To get the full security value out of Defender for Containers, and to fully protect GCP clusters, ensure you have the following requirements configured:
- **Kubernetes audit logs to Defender for Cloud** - Enabled by default. This configuration is available at a GCP Project level only. This provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud backend for further analysis. - **Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension** - Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in 3 different ways:
Microsoft Defender for Containers brings threat detection, and advanced defences
|Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
|Required roles and permissions:|**Owner** or **Contributor** on the relevant Azure Subscription| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier.++ Previously updated : 03/14/2022 Last updated : 04/04/2022 # Archive for what's new in Defender for Cloud?
Learn more about using these recommendations in [Harden a machine's OS configura
Updates in August include: -- [Microsoft Defender for Endpoint for Linux now supported by Azure Defender for servers (in preview)](#microsoft-defender-for-endpoint-for-linux-now-supported-by-azure-defender-for-servers-in-preview)
+- [Microsoft Defender for Endpoint for Linux now supported by Azure Defender for Servers (in preview)](#microsoft-defender-for-endpoint-for-linux-now-supported-by-azure-defender-for-servers-in-preview)
- [Two new recommendations for managing endpoint protection solutions (in preview)](#two-new-recommendations-for-managing-endpoint-protection-solutions-in-preview) - [Built-in troubleshooting and guidance for solving common issues](#built-in-troubleshooting-and-guidance-for-solving-common-issues) - [Regulatory compliance dashboard's Azure Audit reports released for general availability (GA)](#regulatory-compliance-dashboards-azure-audit-reports-released-for-general-availability-ga)
Updates in August include:
- [CSV exports of recommendation data now limited to 20 MB](#csv-exports-of-recommendation-data-now-limited-to-20-mb) - [Recommendations page now includes multiple views](#recommendations-page-now-includes-multiple-views)
-### Microsoft Defender for Endpoint for Linux now supported by Azure Defender for servers (in preview)
+### Microsoft Defender for Endpoint for Linux now supported by Azure Defender for Servers (in preview)
-[Azure Defender for servers](defender-for-servers-introduction.md) includes an integrated license for [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities.
+[Azure Defender for Servers](defender-for-servers-introduction.md) includes an integrated license for [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities.
When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Security Center. From Security Center, you can also pivot to the Defender for Endpoint console, and perform a detailed investigation to uncover the scope of the attack.
Learn more in [Connect Azure Defender alerts from Azure Security Center](../sent
The alerts listed below were provided as part of the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) plan.
-As part of a logical reorganization of some of the Azure Defender plans, we've moved some alerts from **Azure Defender for Resource Manager** to **Azure Defender for servers**.
+As part of a logical reorganization of some of the Azure Defender plans, we've moved some alerts from **Azure Defender for Resource Manager** to **Azure Defender for Servers**.
The alerts are organized according to two main principles: - Alerts that provide control-plane protection - across many Azure resource types - are part of Azure Defender for Resource Manager - Alerts that protect specific workloads are in the Azure Defender plan that relates to the corresponding workload
-These are the alerts that were part of Azure Defender for Resource Manager, and which, as a result of this change, are now part of Azure Defender for servers:
+These are the alerts that were part of Azure Defender for Resource Manager, and which, as a result of this change, are now part of Azure Defender for Servers:
- ARM_AmBroadFilesExclusion - ARM_AmDisablementAndCodeExecution
These are the alerts that were part of Azure Defender for Resource Manager, and
- ARM_VMAccessUnusualPasswordReset - ARM_VMAccessUnusualSSHReset
-Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md) plans.
+Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for Servers](defender-for-servers-introduction.md) plans.
### Enhancements to recommendation to enable Azure Disk Encryption (ADE)
For more information, see:
### CI/CD vulnerability scanning of container images with GitHub workflows and Azure Defender (preview)
-Azure Defender for container registries now provides DevSecOps teams observability into GitHub Action workflows.
+Azure Defender for container registries now provides DevSecOps teams observability into GitHub Actions workflows.
The new vulnerability scanning feature for container images, utilizing Trivy, helps your developers scan for common vulnerabilities in their container images *before* pushing images to container registries.
Learn more in [Use Azure Defender for Kubernetes with your on-premises and multi
Microsoft Defender for Endpoint is a holistic, cloud delivered endpoint security solution. It provides risk-based vulnerability management and assessment as well as endpoint detection and response (EDR). For a full list of the benefits of using Defender for Endpoint together with Azure Security Center, see [Protect your endpoints with Security Center's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
-When you enable Azure Defender for servers running Windows Server, a license for Defender for Endpoint is included with the plan. If you've already enabled Azure Defender for servers and you have Windows Server 2019 servers in your subscription, they'll automatically receive Defender for Endpoint with this update. No manual action is required.
+When you enable Azure Defender for Servers running Windows Server, a license for Defender for Endpoint is included with the plan. If you've already enabled Azure Defender for Servers and you have Windows Server 2019 servers in your subscription, they'll automatically receive Defender for Endpoint with this update. No manual action is required.
Support has now been expanded to include Windows Server 2019 and Windows 10 on [Windows Virtual Desktop](../virtual-desktop/overview.md).
Learn more in [Workload protection best-practices using Kubernetes admission con
Microsoft Defender for Endpoint is a holistic, cloud delivered endpoint security solution. It provides risk-based vulnerability management and assessment as well as endpoint detection and response (EDR). For a full list of the benefits of using Defender for Endpoint together with Azure Security Center, see [Protect your endpoints with Security Center's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
-When you enable Azure Defender for servers running Windows Server, a license for Defender for Endpoint is included with the plan. If you've already enabled Azure Defender for servers and you have Windows Server 2019 servers in your subscription, they'll automatically receive Defender for Endpoint with this update. No manual action is required.
+When you enable Azure Defender for Servers running Windows Server, a license for Defender for Endpoint is included with the plan. If you've already enabled Azure Defender for Servers and you have Windows Server 2019 servers in your subscription, they'll automatically receive Defender for Endpoint with this update. No manual action is required.
Support has now been expanded to include Windows Server 2019 and Windows 10 on [Windows Virtual Desktop](../virtual-desktop/overview.md).
To learn more, see the following pages:
### Vulnerability assessment for on-premise and multi-cloud machines is released for general availability (GA)
-In October, we announced a preview for scanning Azure Arc-enabled servers with [Azure Defender for servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys).
+In October, we announced a preview for scanning Azure Arc-enabled servers with [Azure Defender for Servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys).
It's now released for general availability (GA). When you've enabled Azure Arc on your non-Azure machines, Security Center will offer to deploy the integrated vulnerability scanner on them - manually and at-scale.
-With this update, you can unleash the power of **Azure Defender for servers** to consolidate your vulnerability management program across all of your Azure and non-Azure assets.
+With this update, you can unleash the power of **Azure Defender for Servers** to consolidate your vulnerability management program across all of your Azure and non-Azure assets.
Main capabilities:
These tools have been enhanced and expanded in the following ways:
- **Support exporting secure score data.** -- **Regulatory compliance assessment data added (in preview).** You can now continuously export updates to regulatory compliance assessments, including for any custom initiatives, to a Log Analytics workspace or Event Hub. This feature is unavailable on national clouds.
+- **Regulatory compliance assessment data added (in preview).** You can now continuously export updates to regulatory compliance assessments, including for any custom initiatives, to a Log Analytics workspace or Event Hubs. This feature is unavailable on national clouds.
:::image type="content" source="media/release-notes/continuous-export-regulatory-compliance-option.png" alt-text="The options for including regulatory compliance assessment information with your continuous export data.":::
Updates in October include:
### Vulnerability assessment for on-premise and multi-cloud machines (preview)
-[Azure Defender for servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys) now scans Azure Arc-enabled servers.
+[Azure Defender for Servers](defender-for-servers-introduction.md)' integrated vulnerability assessment scanner (powered by Qualys) now scans Azure Arc-enabled servers.
When you've enabled Azure Arc on your non-Azure machines, Security Center will offer to deploy the integrated vulnerability scanner on them - manually and at-scale.
-With this update, you can unleash the power of **Azure Defender for servers** to consolidate your vulnerability management program across all of your Azure and non-Azure assets.
+With this update, you can unleash the power of **Azure Defender for Servers** to consolidate your vulnerability management program across all of your Azure and non-Azure assets.
Main capabilities:
Azure Resource Graph is a service in Azure that is designed to provide efficient
For Azure Security Center, you can use ARG and the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) to query a wide range of security posture data. For example: - Asset inventory utilizes (ARG)-- We have documented a sample ARG query for how to [Identify accounts without multifactor authentication (MFA) enabled](multi-factor-authentication-enforcement.md#identify-accounts-without-multi-factor-authentication-mfa-enabled)
+- We have documented a sample ARG query for how to [Identify accounts without multi-factor authentication (MFA) enabled](multi-factor-authentication-enforcement.md#identify-accounts-without-multi-factor-authentication-mfa-enabled)
Within ARG, there are tables of data for you to use in your queries.
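For example, assessment results live in the `securityresources` table. A minimal sketch along the lines of the MFA sample linked above (the display-name filter is an assumption; adjust it to the exact recommendation names in your tenant):

```kusto
// Subscriptions with unhealthy MFA-related recommendations
securityresources
| where type == "microsoft.security/assessments"
| where properties.displayName contains "multi-factor"
| where properties.status.code == "Unhealthy"
| project subscriptionId, recommendation = tostring(properties.displayName)
```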
Learn more about the [overview page](overview-page.md).
When you enable Azure Defender from the **Pricing and settings** area of Azure Security Center, the following Defender plans are all enabled simultaneously and provide comprehensive defenses for the compute, data, and service layers of your environment: -- [Azure Defender for servers](defender-for-servers-introduction.md)
+- [Azure Defender for Servers](defender-for-servers-introduction.md)
- [Azure Defender for App Service](defender-for-app-service-introduction.md) - [Azure Defender for Storage](defender-for-storage-introduction.md) - [Azure Defender for SQL](defender-for-sql-introduction.md)
The details page for recommendations now includes a freshness interval indicator
Updates in August include: - [Asset inventory - powerful new view of the security posture of your assets](#asset-inventorypowerful-new-view-of-the-security-posture-of-your-assets)-- [Added support for Azure Active Directory security defaults (for multifactor authentication)](#added-support-for-azure-active-directory-security-defaults-for-multifactor-authentication)
+- [Added support for Azure Active Directory security defaults (for multi-factor authentication)](#added-support-for-azure-active-directory-security-defaults-for-multi-factor-authentication)
- [Service principals recommendation added](#service-principals-recommendation-added) - [Vulnerability assessment on VMs - recommendations and policies consolidated](#vulnerability-assessment-on-vmsrecommendations-and-policies-consolidated) - [New AKS security policies added to ASC_default initiative ΓÇô for use by private preview customers only](#new-aks-security-policies-added-to-asc_default-initiative--for-use-by-private-preview-customers-only)
You can use the view and its filters to explore your security posture data and t
Learn more about [asset inventory](asset-inventory.md).
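Because asset inventory is built on Azure Resource Graph, the same data can be explored with your own queries. A trivial sketch that reproduces the inventory's grouping of connected resources by type:

```kusto
// Connected resources grouped by resource type
resources
| summarize resourceCount = count() by type
| order by resourceCount desc
```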
-### Added support for Azure Active Directory security defaults (for multifactor authentication)
+### Added support for Azure Active Directory security defaults (for multi-factor authentication)
Security Center has added full support for [security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md), Microsoft's free identity security protections. Security defaults provide preconfigured identity security settings to defend your organization from common identity-related attacks. Security defaults already protecting more than 5 million tenants overall; 50,000 tenants are also protected by Security Center.
-Security Center now provides a security recommendation whenever it identifies an Azure subscription without security defaults enabled. Until now, Security Center recommended enabling multifactor authentication using conditional access, which is part of the Azure Active Directory (AD) premium license. For customers using Azure AD free, we now recommend enabling security defaults.
+Security Center now provides a security recommendation whenever it identifies an Azure subscription without security defaults enabled. Until now, Security Center recommended enabling multi-factor authentication using conditional access, which is part of the Azure Active Directory (AD) premium license. For customers using Azure AD free, we now recommend enabling security defaults.
Our goal is to encourage more customers to secure their cloud environments with MFA, and mitigate one of the highest risks that is also the most impactful to your [secure score](secure-score-security-controls.md).
The policy definitions can be found in Azure Policy:
|Goal |Policy |Policy ID | ||||
-|Continuous export to Event Hub|[Deploy export to Event Hub for Azure Security Center alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcdfcce10-4578-4ecd-9703-530938e4abcb)|cdfcce10-4578-4ecd-9703-530938e4abcb|
+|Continuous export to Event Hubs|[Deploy export to Event Hubs for Azure Security Center alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcdfcce10-4578-4ecd-9703-530938e4abcb)|cdfcce10-4578-4ecd-9703-530938e4abcb|
|Continuous export to Log Analytics workspace|[Deploy export to Log Analytics workspace for Azure Security Center alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fffb6f416-7bd2-4488-8828-56585fef2be9)|ffb6f416-7bd2-4488-8828-56585fef2be9| |Workflow automation for security alerts|[Deploy Workflow Automation for Azure Security Center alerts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff1525828-9a90-4fcf-be48-268cdd02361e)|f1525828-9a90-4fcf-be48-268cdd02361e| |Workflow automation for security recommendations|[Deploy Workflow Automation for Azure Security Center recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef|
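To confirm where these at-scale policies are actually assigned, you can query policy assignments through Azure Resource Graph. A sketch using two of the policy IDs from the table above:

```kusto
// Find assignments of the continuous export / workflow automation policies
policyresources
| where type == "microsoft.authorization/policyassignments"
| where properties.policyDefinitionId has "cdfcce10-4578-4ecd-9703-530938e4abcb"
    or properties.policyDefinitionId has "f1525828-9a90-4fcf-be48-268cdd02361e"
| project assignmentId = id, scope = tostring(properties.scope)
```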
Learn more about [enhancing your custom recommendations with detailed informatio
### Crash dump analysis capabilities migrating to fileless attack detection
-We are integrating the Windows crash dump analysis (CDA) detection capabilities into [fileless attack detection](defender-for-servers-introduction.md#what-are-the-benefits-of-microsoft-defender-for-servers). Fileless attack detection analytics brings improved versions of the following security alerts for Windows machines: Code injection discovered, Masquerading Windows Module Detected, Shell code discovered, and Suspicious code segment detected.
+We are integrating the Windows crash dump analysis (CDA) detection capabilities into [fileless attack detection](defender-for-servers-introduction.md#what-are-the-benefits-of-defender-for-servers). Fileless attack detection analytics brings improved versions of the following security alerts for Windows machines: Code injection discovered, Masquerading Windows Module Detected, Shell code discovered, and Suspicious code segment detected.
Some of the benefits of this transition:
Security recommendations for identity and access on the Azure Security Center fr
Examples of identity and access recommendations include: -- "Multifactor authentication should be enabled on accounts with owner permissions on your subscription."
+- "Multi-factor authentication should be enabled on accounts with owner permissions on your subscription."
- "A maximum of three owners should be designated for your subscription." - "Deprecated accounts should be removed from your subscription."
If you have subscriptions on the free pricing tier, their secure scores will be
Learn more about [identity and access recommendations](recommendations-reference.md#recs-identityandaccess).
-Learn more about [Managing multifactor authentication (MFA) enforcement on your subscriptions](multi-factor-authentication-enforcement.md).
+Learn more about [Managing multi-factor authentication (MFA) enforcement on your subscriptions](multi-factor-authentication-enforcement.md).
Use Security Center to receive recommendations not only from Microsoft but also
### Advanced integrations with export of recommendations and alerts (preview)
-In order to enable enterprise level scenarios on top of Security Center, it's now possible to consume Security Center alerts and recommendations in additional places except the Azure portal or API. These can be directly exported to an Event Hub and to Log Analytics workspaces. Here are a few workflows you can create around these new capabilities:
+In order to enable enterprise-level scenarios on top of Security Center, it's now possible to consume Security Center alerts and recommendations in additional places beyond the Azure portal or API. These can be directly exported to an event hub and to Log Analytics workspaces. Here are a few workflows you can create around these new capabilities:
- With export to Log Analytics workspace, you can create custom dashboards with Power BI.-- With export to Event Hub, you'll be able to export Security Center alerts and recommendations to your third-party SIEMs, to a third-party solution, or Azure Data Explorer.
+- With export to Event Hubs, you'll be able to export Security Center alerts and recommendations to your third-party SIEMs, to a third-party solution, or Azure Data Explorer.
### Onboard on-prem servers to Security Center from Windows Admin Center (preview)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## April 2022
+
+Updates in April include:
+
+- [New Defender for Servers plans](#new-defender-for-servers-plans)
+
+### New Defender for Servers plans
+
+Microsoft Defender for Servers is now offered in two incremental plans.
+
+- Microsoft Defender for Servers Plan 2, formerly Defender for Servers
+- Microsoft Defender for Servers Plan 1, including support for Defender for Endpoint only
+
+While Microsoft Defender for Servers Plan 2 continues to provide complete protection from threats and vulnerabilities for your cloud and on-premises workloads, Microsoft Defender for Servers Plan 1 provides endpoint protection only, powered by Microsoft Defender for Endpoint and natively integrated with Defender for Cloud. Read more about the [Microsoft Defender for Servers plans](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).
+
+If you have been using Defender for Servers until now, no action is required.
+
+In addition, Defender for Cloud begins gradual support for the [Defender for Endpoint unified agent for Windows Server 2012 R2 and 2016 (Preview)](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292). Defender for Servers Plan 1 deploys the new unified agent to Windows Server 2012 R2 and 2016 workloads. Defender for Servers Plan 2 deploys the legacy agent to Windows Server 2012 R2 and 2016 workloads, and will deploy the unified agent soon after it's approved for general use.
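To see which plan each subscription ends up on, the pricing configuration can be queried through Azure Resource Graph. A sketch, assuming the `subPlan` property distinguishes Plan 1 from Plan 2 (if it comes back empty, check the subscription's Pricings settings in the portal instead):

```kusto
// Defender for Servers pricing tier and plan per subscription
securityresources
| where type == "microsoft.security/pricings"
| where name == "VirtualMachines"
| project subscriptionId,
          pricingTier = tostring(properties.pricingTier),
          subPlan = tostring(properties.subPlan)
```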
+ ## March 2022 Updates in March include:
All Microsoft Defenders for IoT device alerts are no longer visible in Microsoft
- **Defender for Cloud's CSPM features** extend to your AWS and GCP resources. This agentless plan assesses your multi cloud resources according to cloud-specific security recommendations that are included in your secure score. The resources are assessed for compliance using the built-in standards. Defender for Cloud's asset inventory page is a multi-cloud enabled feature that allows you to manage your AWS resources alongside your Azure resources. -- **Microsoft Defender for servers** brings threat detection and advanced defenses to your compute instances in AWS and GCP. The Defender for servers plan includes an integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more. Learn about all of the [supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities allow you to easily connect any existing or new compute instances discovered in your environment.
+- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your compute instances in AWS and GCP. The Defender for Servers plan includes an integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more. Learn about all of the [supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities allow you to easily connect any existing or new compute instances discovered in your environment.
Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.md) and [GCP organization](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
The new automated onboarding of GCP environments allows you to protect GCP workl
- **Defender for Cloud's CSPM** features extend to your GCP resources. This agentless plan assesses your GCP resources according to the GCP-specific security recommendations, which are provided with Defender for Cloud. GCP recommendations are included in your secure score, and the resources will be assessed for compliance with the built-in GCP CIS standard. Defender for Cloud's asset inventory page is a multi-cloud enabled feature helping you manage your resources across Azure, AWS, and GCP. -- **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP compute instances. This plan includes the integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more.
+- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your GCP compute instances. This plan includes the integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more.
For a full list of available features, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities will allow you to easily connect any existing and new compute instances discovered in your environment.
In addition, these two alerts from this plan have come out of preview:
### Recommendations to enable Microsoft Defender plans on workspaces (in preview)
-To benefit from all of the security features available from [Microsoft Defender for servers](defender-for-servers-introduction.md) and [Microsoft Defender for SQL on machines](defender-for-sql-introduction.md), the plans must be enabled on **both** the subscription and workspace levels.
+To benefit from all of the security features available from [Microsoft Defender for Servers](defender-for-servers-introduction.md) and [Microsoft Defender for SQL on machines](defender-for-sql-introduction.md), the plans must be enabled on **both** the subscription and workspace levels.
When a machine is in a subscription with one of these plans enabled, you'll be billed for the full protection. However, if that machine is reporting to a workspace *without* the plan enabled, you won't actually receive those benefits.
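For reference, a minimal sketch of enabling the servers plan at the subscription level with the Azure CLI; the workspace-level setting is configured separately (for example, in the portal), and the subscription is assumed to be the CLI's current default:

```bash
# Enable Microsoft Defender for Servers on the current subscription
az security pricing create --name VirtualMachines --tier standard
```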
The two recommendations, which both offer automated remediation (the 'Fix' actio
|Recommendation |Description |Severity |
|---|---|---|
-|[Microsoft Defender for servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) |Microsoft Defender for servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for servers</a>.<br />(No related policy) |Medium |
-|[Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) |Microsoft Defender for servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for servers</a>.<br />(No related policy) |Medium |
+|[Microsoft Defender for Servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |
+|[Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |
Advance notice of this change appeared for the last six months in the [Important
The following alert was previously only available to organizations who had enabled the [Microsoft Defender for DNS](defender-for-dns-introduction.md) plan.
-With this update, the alert will also show for subscriptions with the [Microsoft Defender for servers](defender-for-servers-introduction.md) or [Defender for App Service](defender-for-app-service-introduction.md) plan enabled.
+With this update, the alert will also show for subscriptions with the [Microsoft Defender for Servers](defender-for-servers-introduction.md) or [Defender for App Service](defender-for-app-service-introduction.md) plan enabled.
In addition, [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) has expanded the list of known malicious domains to include domains associated with exploiting the widely publicized vulnerabilities associated with Log4j.
Our Ignite release includes:
Other changes in November include: - [Microsoft Threat and Vulnerability Management added as vulnerability assessment solution - released for general availability (GA)](#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solutionreleased-for-general-availability-ga)-- [Microsoft Defender for Endpoint for Linux now supported by Microsoft Defender for servers - released for general availability (GA)](#microsoft-defender-for-endpoint-for-linux-now-supported-by-microsoft-defender-for-serversreleased-for-general-availability-ga)
+- [Microsoft Defender for Endpoint for Linux now supported by Microsoft Defender for Servers - released for general availability (GA)](#microsoft-defender-for-endpoint-for-linux-now-supported-by-microsoft-defender-for-serversreleased-for-general-availability-ga)
- [Snapshot export for recommendations and security findings (in preview)](#snapshot-export-for-recommendations-and-security-findings-in-preview) - [Auto provisioning of vulnerability assessment solutions released for general availability (GA)](#auto-provisioning-of-vulnerability-assessment-solutions-released-for-general-availability-ga) - [Software inventory filters in asset inventory released for general availability (GA)](#software-inventory-filters-in-asset-inventory-released-for-general-availability-ga)
When you've added your AWS accounts, Defender for Cloud protects your AWS resour
- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources. - **Microsoft Defender for Kubernetes** extends its container threat detection and advanced defenses to your **Amazon EKS Linux clusters**.-- **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
+- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
Learn more about [connecting your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md).
Learn more in [Review your security recommendations](review-security-recommendat
### Microsoft Threat and Vulnerability Management added as vulnerability assessment solution - released for general availability (GA)
-In October, [we announced](#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solution-in-preview) an extension to the integration between [Microsoft Defender for servers](defender-for-servers-introduction.md) and Microsoft Defender for Endpoint, to support a new vulnerability assessment provider for your machines: [Microsoft threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt). This feature is now released for general availability (GA).
+In October, [we announced](#microsoft-threat-and-vulnerability-management-added-as-vulnerability-assessment-solution-in-preview) an extension to the integration between [Microsoft Defender for Servers](defender-for-servers-introduction.md) and Microsoft Defender for Endpoint, to support a new vulnerability assessment provider for your machines: [Microsoft threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt). This feature is now released for general availability (GA).
Use **threat and vulnerability management** to discover vulnerabilities and misconfigurations in near real time with the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) enabled, and without the need for additional agents or periodic scans. Threat and vulnerability management prioritizes vulnerabilities based on the threat landscape and detections in your organization.
To automatically surface the vulnerabilities, on existing and new machines, with
Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
-### Microsoft Defender for Endpoint for Linux now supported by Microsoft Defender for servers - released for general availability (GA)
+### Microsoft Defender for Endpoint for Linux now supported by Microsoft Defender for Servers - released for general availability (GA)
In August, [we announced](release-notes-archive.md#microsoft-defender-for-endpoint-for-linux-now-supported-by-azure-defender-for-servers-in-preview) preview support for deploying the [Defender for Endpoint for Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux) sensor to supported Linux machines. This feature is now released for general availability (GA).
-[Microsoft Defender for servers](defender-for-servers-introduction.md) includes an integrated license for [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities.
+[Microsoft Defender for Servers](defender-for-servers-introduction.md) includes an integrated license for [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities.
When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can also pivot to the Defender for Endpoint console, and perform a detailed investigation to uncover the scope of the attack.
Even though the feature is called *continuous*, there's also an option to export
### Auto provisioning of vulnerability assessment solutions released for general availability (GA)
-In October, [we announced](#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview) the addition of vulnerability assessment solutions to Defender for Cloud's auto provisioning page. This is relevant to Azure virtual machines and Azure Arc machines on subscriptions protected by [Azure Defender for servers](defender-for-servers-introduction.md). This feature is now released for general availability (GA).
+In October, [we announced](#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview) the addition of vulnerability assessment solutions to Defender for Cloud's auto provisioning page. This is relevant to Azure virtual machines and Azure Arc machines on subscriptions protected by [Azure Defender for Servers](defender-for-servers-introduction.md). This feature is now released for general availability (GA).
If the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) is enabled, Defender for Cloud presents a choice of vulnerability assessment solutions:
Updates in October include:
### Microsoft Threat and Vulnerability Management added as vulnerability assessment solution (in preview)
-We've extended the integration between [Azure Defender for servers](defender-for-servers-introduction.md) and Microsoft Defender for Endpoint, to support a new vulnerability assessment provider for your machines: [Microsoft threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt).
+We've extended the integration between [Azure Defender for Servers](defender-for-servers-introduction.md) and Microsoft Defender for Endpoint, to support a new vulnerability assessment provider for your machines: [Microsoft threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt).
Use **threat and vulnerability management** to discover vulnerabilities and misconfigurations in near real time with the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) enabled, and without the need for additional agents or periodic scans. Threat and vulnerability management prioritizes vulnerabilities based on the threat landscape and detections in your organization.
Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's thr
### Vulnerability assessment solutions can now be auto enabled (in preview)
-Security Center's auto provisioning page now includes the option to automatically enable a vulnerability assessment solution to Azure virtual machines and Azure Arc machines on subscriptions protected by [Azure Defender for servers](defender-for-servers-introduction.md).
+Security Center's auto provisioning page now includes the option to automatically enable a vulnerability assessment solution to Azure virtual machines and Azure Arc machines on subscriptions protected by [Azure Defender for Servers](defender-for-servers-introduction.md).
If the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) is enabled, Defender for Cloud presents a choice of vulnerability assessment solutions:
For full details, including sample Kusto queries for Azure Resource Graph, see [
In July 2021, we announced a [logical reorganization of Azure Defender for Resource Manager alerts](release-notes-archive.md#logical-reorganization-of-azure-defender-for-resource-manager-alerts)
-As part of a logical reorganization of some of the Azure Defender plans, we moved twenty-one alerts from [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) to [Azure Defender for servers](defender-for-servers-introduction.md).
+As part of a logical reorganization of some of the Azure Defender plans, we moved twenty-one alerts from [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) to [Azure Defender for Servers](defender-for-servers-introduction.md).
With this update, we've changed the prefixes of these alerts to match this reassignment and replaced "ARM_" with "VM_" as shown in the following table:
With this update, we've changed the prefixes of these alerts to match this reass
| ARM_VMAccessUnusualSSHReset | VM_VMAccessUnusualSSHReset |
-Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md) plans.
+Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for Servers](defender-for-servers-introduction.md) plans.
### Changes to the logic of a security recommendation for Kubernetes clusters
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--|--|
-| Compliance | Docker CIS | VMs | GA | X | Log Analytics agent | Defender for Servers | |
+| Compliance | Docker CIS | VMs | GA | X | Log Analytics agent | Defender for Servers Plan 2 | |
| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | ✓ (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds | | Hardening | Control plane recommendations | ACR, AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --|
-| Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers |
-| Vulnerability assessment | Registry scan | - | - | - | - | - |
-| Vulnerability assessment | View vulnerabilities for running images | - | - | - | - | - |
+| Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
+| Vulnerability Assessment | Registry scan | - | - | - | - | - |
+| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers | | Runtime protection| Threat detection (control plane)| EKS | Preview | ✓ | Agentless | Defender for Containers |
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --|
-| Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers |
-| Vulnerability assessment | Registry scan | - | - | - | - | - |
-| Vulnerability assessment | View vulnerabilities for running images | - | - | - | - | - |
+| Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
+| Vulnerability Assessment | Registry scan | - | - | - | - | - |
+| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers | | Runtime protection| Threat detection (control plane)| GKE | Preview | ✓ | Agentless | Defender for Containers |
The **tabs** below show the features that are available, by environment, for Mic
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-### [**On-prem/IasS (Arc)**](#tab/iass-arc)
+### [**On-prem/IaaS (Arc)**](#tab/iaas-arc)
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --|
-| Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers |
-| Vulnerability assessment | Registry scan | ACR, Private ACR | Preview | ✓ (Preview) | Agentless | Defender for Containers |
-| Vulnerability assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
+| Vulnerability Assessment | Registry scan | ACR, Private ACR | Preview | ✓ (Preview) | Agentless | Defender for Containers |
+| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers | | Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | ✓ | Defender extension | Defender for Containers |
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
### [**Windows machines**](#tab/features-windows)
-| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for servers required** |
+| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for Servers required** |
|--|::|::|::|::| | [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔</br>(on supported versions) | ✔</br>(on supported versions) | ✔ | Yes | | [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | ✔ | Yes |
The **tabs** below show the features of Microsoft Defender for Cloud that are av
### [**Linux machines**](#tab/features-linux)
-| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for servers required** |
+| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for Servers required** |
|--|::|::|::|::| | [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | - | ✔ | Yes | | [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔</br>(on supported versions) | ✔</br>(on supported versions) | ✔ | Yes |
For information about when recommendations are generated for each of these solut
| - [Azure Monitor Workbooks reports in Microsoft Defender for Cloud's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA | | - [Integration with Microsoft Defender for Cloud Apps](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps-) | GA | Not Available | Not Available | | **Microsoft Defender plans and extensions** | | | |
-| - [Microsoft Defender for servers](./defender-for-servers-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for Servers](./defender-for-servers-introduction.md) | GA | GA | GA |
| - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available | | - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA | | - [Microsoft Defender for container registries](./defender-for-container-registries-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA <sup>[2](#footnote2)</sup> | GA <sup>[2](#footnote2)</sup> |
For information about when recommendations are generated for each of these solut
| - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Public Preview | Not Available | Not Available | | - [Kubernetes workload protection](./kubernetes-workload-protections.md) | GA | GA | GA | | - [Bi-directional alert synchronization with Sentinel](../sentinel/connect-azure-security-center.md) | Public Preview | Not Available | Not Available |
-| **Microsoft Defender for servers features** <sup>[7](#footnote7)</sup> | | | |
+| **Microsoft Defender for Servers features** <sup>[7](#footnote7)</sup> | | | |
| - [Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA | | - [File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA | | - [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA |
For information about when recommendations are generated for each of these solut
<sup><a name="footnote6"></a>6</sup> Partially GA: Some of the threat protection alerts from Microsoft Defender for Storage are in public preview.
-<sup><a name="footnote7"></a>7</sup> These features all require [Microsoft Defender for servers](./defender-for-servers-introduction.md).
+<sup><a name="footnote7"></a>7</sup> These features all require [Microsoft Defender for Servers](./defender-for-servers-introduction.md).
<sup><a name="footnote8"></a>8</sup> There may be differences in the standards offered per cloud type.
defender-for-cloud Windows Admin Center Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/windows-admin-center-integration.md
By combining these two tools, Defender for Cloud becomes your single pane of gla
* An Azure Gateway is registered. * The server has a workspace to report to and an associated subscription. * Defender for Cloud's Log Analytics solution is enabled on the workspace. This solution provides Microsoft Defender for Cloud's features for *all* servers and virtual machines reporting to this workspace.
- * Microsoft Defender for servers is enabled on the subscription.
+ * Microsoft Defender for Servers is enabled on the subscription.
* The Log Analytics agent is installed on the server and configured to report to the selected workspace. If the server already reports to another workspace, it's configured to report to the newly selected workspace as well. > [!NOTE]
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-model.md
If you're using the SDK, you can upload multiple model files with the `CreateMod
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="CreateModels_multi":::
-If you're using the [REST APIs](/rest/api/azure-digitaltwins/) or [Azure CLI](/cli/azure/dt), you can also upload multiple models by placing multiple model definitions in a single JSON file to be uploaded together. In this case, the models should placed in a JSON array within the file, like in the following example:
+If you're using the [REST APIs](/rest/api/azure-digitaltwins/) or [Azure CLI](/cli/azure/dt), you can also upload multiple models by placing multiple model definitions in a single JSON file to be uploaded together. In this case, the models should be placed in a JSON array within the file, like in the following example:
:::code language="json" source="~/digital-twins-docs-samples/models/Planet-Moon.json":::
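As a sketch, a file like this can also be uploaded with the Azure CLI; the instance name below is a placeholder:

```bash
# Upload all models defined in the JSON array within the file
az dt model create --dt-name <your-instance-name> --models ./Planet-Moon.json
```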
Azure Digital Twins doesn't prevent this state, so be careful to patch twins app
## Next steps See how to create and manage digital twins based on your models:
-* [Manage digital twins](how-to-manage-twin.md)
+* [Manage digital twins](how-to-manage-twin.md)
event-grid Auth0 How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-how-to.md
Title: How to send events from Auth0 to Azure using Azure Event Grid description: How to send events from Auth0 to Azure services with Azure Event Grid. Previously updated : 07/22/2021 Last updated : 03/29/2022 # Integrate Azure Event Grid with Auth0- This article describes how to connect your Auth0 and Azure accounts by creating an Event Grid partner topic.
-See the [Auth0 event type codes](https://auth0.com/docs/logs/references/log-event-type-codes) for a full list of the events that Auth0 supports
+> [!NOTE]
+> See the [Auth0 event type codes](https://auth0.com/docs/logs/references/log-event-type-codes) for a full list of the events that Auth0 supports.
## Send events from Auth0 to Azure Event Grid To send Auth0 events to Azure:
-1. Enable Event Grid resource provider
-1. Set up an Auth0 Partner Topic in the Auth0 Dashboard
-1. Activate the Partner Topic in Azure
-1. Subscribe to events from Auth0
-
-For more information about these concepts, see Event Grid [concepts](concepts.md).
-
-### Enable Event Grid resource provider
-Unless you've used Event Grid before, you'll need to register the Event Grid resource provider. If you've used Event Grid before, skip to the next section.
+1. [Register the Event Grid resource provider](subscribe-to-partner-events.md#register-the-event-grid-resource-provider) with your Azure subscription.
+2. [Authorize Auth0](subscribe-to-partner-events.md#authorize-partner-to-create-a-partner-topic) to create a partner topic in your resource group.
+3. Request Auth0 to enable events flow to a partner topic by [setting up an Auth0 partner topic](#set-up-an-auth0-partner-topic) in the Auth0 Dashboard.
+4. [Activate partner topic](subscribe-to-partner-events.md#activate-a-partner-topic) so that your events start flowing to your partner topic.
+5. [Subscribe to events](subscribe-to-partner-events.md#subscribe-to-events).
-In the Azure portal:
-1. Select Subscriptions on the left menu
-1. Select the subscription you're using for Event Grid
-1. On the left menu, under Settings, select Resource providers
-1. Find Microsoft.EventGrid
-1. Select Register
-1. Refresh to make sure the status changes to Registered
+This article provides steps for doing task #3 from the list above. All other tasks are documented in the [Subscribe to partner events](subscribe-to-partner-events.md) article.
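If you prefer scripting, tasks #1 and #4 can also be done with the Azure CLI; a minimal sketch, with placeholder names:

```bash
# Task 1: register the Event Grid resource provider (one time per subscription)
az provider register --namespace Microsoft.EventGrid

# Task 4: activate the partner topic after Auth0 creates it in your resource group
az eventgrid partner topic activate --resource-group <resource-group> --name <partner-topic-name>
```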
-### Set up an Auth0 Partner Topic
-Part of the integration process is to set up Auth0 for use as an event source (this step happens in your [Auth0 Dashboard](https://manage.auth0.com/)).
+## Set up an Auth0 partner topic
+Part of the integration process is to set up Auth0 for use as an event source by using the [Auth0 Dashboard](https://manage.auth0.com/).
1. Log in to the [Auth0 Dashboard](https://manage.auth0.com/).
-1. Navigate to Logs > Streams.
-1. Click + Create Stream.
-1. Select Azure Event Grid and enter a unique name for your new stream.
-1. Create the event source by providing your Azure Subscription ID, Azure Region, and a Resource Group name.
-1. Click Save.
-
-### Activate your Auth0 Partner Topic in Azure
-Activating the Auth0 topic in Azure allows events to flow from Auth0 to Azure.
-
-1. Log in to the Azure portal.
-1. Search `Partner Topics` at the top and click `Event Grid Partner Topics` under services.
-1. Click on the topic that matches the stream you created in your Auth0 Dashboard.
-1. Confirm the `Source` field matches your Auth0 account.
-1. Click Activate.
-
-### Subscribe to Auth0 events
-
-#### Create an event handler
-To test your Partner Topic, you'll need an event handler. Go to your Azure subscription and spin up a service that is supported as an [event handler](event-handlers.md) such as an [Azure Function](custom-event-to-function.md).
-
-#### Subscribe to your Auth0 Partner Topic
-Subscribing to your Auth0 Partner Topic allows you to tell Event Grid where you want your Auth0 events to go.
-
-1. On the Partner Topic blade for your Auth0 integration, select + Event Subscription at the top.
-1. On the Create Event Subscription page:
- 1. Enter a name for the event subscription.
- 1. Select the Azure service or Webhook you created for the Endpoint type.
- 1. Follow the instructions for the particular service.
- 1. Click Create.
-
-## Testing
-Your Auth0 Partner Topic integration with Azure should be ready to use.
-
-### Verify the integration
+1. Navigate to **Monitoring** > **Streams**.
+1. Click **+ Create Log Stream**.
+1. Select **Azure Event Grid** and enter a unique name for your new stream.
+1. For **Subscription ID**, enter your Azure subscription ID.
+1. For **Azure Region**, select the Azure region in which the resource group exists.
+1. For **Resource Group**, enter the name of the resource group.
+1. For **Filter by Event Category**, select **All** or filter for specific types of events.
+1. Select the **Use a specific day and time to start the stream from** option if you want the streaming to start on a specific day and time.
+1. Click **Save**.
+
+You should see the partner topic in the resource group you specified. [Activate the partner topic](subscribe-to-partner-events.md#activate-a-partner-topic) so that your events start flowing to your partner topic. Then, [subscribe to events](subscribe-to-partner-events.md#subscribe-to-events).
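To check from the command line that the partner topic was created, a sketch like this can be used (the resource group name is a placeholder):

```bash
# List partner topics that partners have created in the resource group
az eventgrid partner topic list --resource-group <resource-group> --output table
```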
+
+
+## Verify the integration
To verify that the integration is working as expected: 1. Log in to the Auth0 Dashboard.
-1. Navigate to Logs > Streams.
-1. Click on your Event Grid stream.
-1. Once on the stream, click on the Health tab. The stream should be active and as long as you don't see any errors, the stream is working.
+1. Navigate to **Logs** > **Streams**.
+1. Click on your **Event Grid stream**.
+1. Once on the stream, click on the **Health** tab. The stream should be active and as long as you don't see any errors, the stream is working.
Try [invoking any of the Auth0 actions that trigger an event to be published](https://auth0.com/docs/logs/references/log-event-type-codes) to see events flow. ## Delivery attempts and retries
-Auth0 events are delivered to Azure via a streaming mechanism. Each event is sent as it is triggered in Auth0. If Event Grid is unable to receive the event, Auth0 will retry up to three times to deliver the event. Otherwise, Auth0 will log the failure to deliver in its system.
+Auth0 events are delivered to Azure via a streaming mechanism. Each event is sent as it's triggered in Auth0. If Event Grid is unable to receive the event, Auth0 will retry up to three times to deliver the event. Otherwise, Auth0 will log the failure to deliver in its system.
## Next steps - [Auth0 Partner Topic](auth0-overview.md) - [Partner topics overview](partner-events-overview.md)-- [Become an Event Grid partner](partner-onboarding-overview.md)
+- [Become an Event Grid partner](onboard-partner.md)
event-grid Auth0 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-overview.md
Last updated 07/22/2021
Auth0, the identity platform for application builders, provides developers and enterprises with the building blocks they need to secure their applications.
-The Auth0 partner topic allows you to use events that are emitted by Auth0's system to accomplish a number of tasks. Engage with users in meaningful ways after the authentication or automate security and infrastructure tasks.
+The Auth0 partner topic allows you to use events that are emitted by Auth0's system to accomplish many tasks. Engage with users in meaningful ways after the authentication or automate security and infrastructure tasks.
The integration allows you to stream your Auth0 log events with high reliability into Azure. There, you can consume the events with your favorite Azure resources. This integration allows you to react to events, gain insights, monitor for security issues, and interact with other powerful data pipelines.
Delivering a strong user experience is critical to reducing churn and keeping yo
### Understand user behavior Understand when users access your product, where they're signed in, and what devices they use. Develop an understanding of the product areas that matter most by keeping track of these signals. These signals help you determine:-- What browsers and devices to support. -- What languages to localize your app in. -- When your peak traffic times are. +
+- Browsers and devices to support
+- Languages to localize your app
+- Peak traffic times
### Manage user data Keeping and auditing your user actions is crucial for maintaining security and following industry regulations. The ability to edit, remove, or export user data is increasingly important to following privacy laws, such as the European Union's General Data Protection Regulation (GDPR).
Combining security monitoring and incident response procedures is important when
- [Partner topics overview](partner-events-overview.md) - [How to use the Auth0 partner topic](auth0-how-to.md) - [Auth0 documentation](https://auth0.com/docs/azure-tutorial)-- [Become an Event Grid partner](partner-onboarding-overview.md)
+- [Become an Event Grid partner](onboard-partner.md)
event-grid Cloud Event Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloud-event-schema.md
Here is an example of an Azure Blob Storage event in CloudEvents format:
"source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Storage/storageAccounts/{storage-account}", "id": "9aeb0fdf-c01e-0131-0922-9eb54906e209", "time": "2019-11-18T15:13:39.4589254Z",
- "subject": "blobServices/default/containers/{storage-container}/blobs/{new-file}",
- "dataschema": "#",
+ "subject": "blobServices/default/containers/{storage-container}/blobs/{new-file}",
"data": { "api": "PutBlockList", "clientRequestId": "4c5dd7fb-2c48-4a27-bb30-5361b5de920a",
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/concepts.md
This article describes the main concepts in Azure Event Grid.
An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information like the source of the event, the time the event took place, and a unique identifier. Every event also has specific information that is only relevant to the specific type of event. For example, an event about a new file being created in Azure Storage has details about the file, such as the `lastTimeModified` value. Or, an Event Hubs event has the URL of the Capture file.
-The maximum allowed size for an event is 1 MB. Events over 64 KB are charged in 64-KB increments. For the properties that are sent in an event, see [Azure Event Grid event schema](event-schema.md).
+The maximum allowed size for an event is 1 MB. Events over 64 KB are charged in 64-KB increments; for example, a 256-KB event is billed as four events. For the properties that are sent in an event, see [CloudEvents schema](cloud-event-schema.md).
## Publishers
-A publisher is the user or organization that decides to send events to Event Grid. Microsoft publishes events for several Azure services. You can publish events from your own application. Organizations that host services outside of Azure can publish events through Event Grid.
+A publisher is the user or organization that sends events to Event Grid. Microsoft publishes events for several Azure services. You can publish events from your own application. Organizations that host services outside of Azure can publish events through Event Grid.
+
+## Partners
+
+A partner is a kind of publisher that sends events from its system to make them available to Azure customers. A partner is typically a SaaS or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) provider that integrates with Azure Event Grid to help customers realize event-driven use cases across platforms. Partners not only can publish events to Azure Event Grid, but they can also receive events from it. These capabilities are enabled through the [Partner Events](partner-events-overview.md) feature.
## Event sources
For information about implementing any of the supported Event Grid sources, see
## Topics
-The event grid topic provides an endpoint where the source sends events. The publisher creates the event grid topic, and decides whether an event source needs one topic or more than one topic. A topic is used for a collection of related events. To respond to certain types of events, subscribers decide which topics to subscribe to.
+An Event Grid topic provides an endpoint where the source sends events. The publisher creates the Event Grid topic, and decides whether an event source needs one topic or more than one topic. A topic is used for a collection of related events. To respond to certain types of events, subscribers decide which topics to subscribe to.
-**System topics** are built-in topics provided by Azure services such as Azure Storage, Azure Event Hubs, and Azure Service Bus. You can create system topics in your Azure subscription and subscribe to them. For more information, see [Overview of system topics](system-topics.md).
+### System topics
+System topics are built-in topics provided by Azure services such as Azure Storage, Azure Event Hubs, and Azure Service Bus. You can create system topics in your Azure subscription and subscribe to them. For more information, see [Overview of system topics](system-topics.md).
-**Custom topics** are application and third-party topics. When you create or are assigned access to a custom topic, you see that custom topic in your subscription. For more information, see [Custom topics](custom-topics.md). When designing your application, you have flexibility when deciding how many topics to create. For large solutions, create a custom topic for each category of related events. For example, consider an application that sends events related to modifying user accounts and processing orders. It's unlikely any event handler wants both categories of events. Create two custom topics and let event handlers subscribe to the one that interests them. For small solutions, you might prefer to send all events to a single topic. Event subscribers can filter for the event types they want.
+### Custom topics
+Custom topics are application and third-party topics. When you create or are assigned access to a custom topic, you see that custom topic in your subscription. For more information, see [Custom topics](custom-topics.md). When designing your application, you have flexibility when deciding how many topics to create. For large solutions, create a custom topic for each category of related events. For example, consider an application that sends events related to modifying user accounts and processing orders. It's unlikely any event handler wants both categories of events. Create two custom topics and let event handlers subscribe to the one that interests them. For small solutions, you might prefer to send all events to a single topic. Event subscribers can filter for the event types they want.
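For illustration, a minimal sketch of creating a custom topic with the Azure CLI; the names and location are placeholders:

```bash
# Create a resource group and a custom topic that applications can publish events to
az group create --name <resource-group> --location <location>
az eventgrid topic create --resource-group <resource-group> --name <topic-name> --location <location>
```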
-There is another type of topic: **partner topic**. The [Partner Events](partner-events-overview.md) feature allows a third-party SaaS provider to publish events from its services to make them available to consumers who can subscribe to those events. The SaaS provider exposes a topic type, a **partner topic**, that subscribers use to consume events. It also offers a clean pub-sub model by separating concerns and ownership of resources that are used by event publishers and subscribers.
+### Partner topics
+Partner topics are a kind of topic used to subscribe to events published by a [partner](#partners). The feature that enables this type of integration is called [Partner Events](partner-events-overview.md). Through that integration, you get a partner topic where events from a partner system are made available. Once you have a partner topic, you create an [event subscription](#event-subscriptions) as you would do for any other kind of topic.
## Event subscriptions
-A subscription tells Event Grid which events on a topic you're interested in receiving. When creating the subscription, you provide an endpoint for handling the event. You can filter the events that are sent to the endpoint. You can filter by event type, or subject pattern. For more information, see [Event Grid subscription schema](subscription-creation-schema.md).
+A subscription tells Event Grid which events on a topic you're interested in receiving. When creating the subscription, you provide an endpoint for handling the event. You can filter the events that are sent to the endpoint. You can filter by event type or event subject, for example. For more information, see [Event Subscriptions](subscribe-through-portal.md) and [CloudEvents schema](cloud-event-schema.md).
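For example, here's a minimal sketch of creating a filtered subscription with the Azure CLI; the resource IDs, names, and filter values are placeholders:

```bash
# Subscribe a webhook to a topic, filtering by event type and subject prefix
az eventgrid event-subscription create \
  --name <subscription-name> \
  --source-resource-id <topic-resource-id> \
  --endpoint https://<your-webhook-endpoint> \
  --included-event-types Microsoft.Storage.BlobCreated \
  --subject-begins-with /blobServices/default/containers/<container-name>
```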
For examples of creating subscriptions, see:
For examples of creating subscriptions, see:
* [Azure PowerShell samples for Event Grid](powershell-samples.md) * [Azure Resource Manager templates for Event Grid](template-samples.md)
-For information about getting your current event grid subscriptions, see [Query Event Grid subscriptions](query-event-subscriptions.md).
+For information about getting your current Event Grid subscriptions, see [Query Event Grid subscriptions](query-event-subscriptions.md).
## Event subscription expiration The event subscription automatically expires after that date. Set an expiration for event subscriptions that are only needed for a limited time and that you don't want to worry about cleaning up. For example, when creating an event subscription to test a scenario, you might want to set an expiration.
For an example of setting an expiration, see [Subscribe with advanced filters](h
## Event handlers
-From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes some further action to process the event. Event Grid supports several handler types. You can use a supported Azure service or your own webhook as the handler. Depending on the type of handler, Event Grid follows different mechanisms to guarantee the delivery of the event. For HTTP webhook event handlers, the event is retried until the handler returns a status code of `200 OK`. For Azure Storage Queue, the events are retried until the Queue service successfully processes the message push into the queue.
+From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes some further action to process the event. Event Grid supports several handler types. You can use a supported Azure service, your own webhook, or a [partner destination](#partner-destination) as the handler. Depending on the type of handler, Event Grid follows different mechanisms to guarantee the delivery of the event. For HTTP webhook event handlers, the event is retried until the handler returns a status code of `200 OK`. For Azure Storage Queue, the events are retried until the Queue service successfully processes the message push into the queue.
-For information about implementing any of the supported Event Grid handlers, see [Event handlers in Azure Event Grid](event-handlers.md).
+For information about delivering events to any of the supported Event Grid handlers, see [Event handlers in Azure Event Grid](event-handlers.md).
+
+### Partner destination
+A partner destination is a resource that is provisioned by a [partner](#partners) and represents a webhook URL on a partner service or application. Partner destinations are created for the purpose of forwarding events to a partner system to enable event-driven integration across platforms. This way, a partner destination can be seen as a type of [event handler](#event-handlers) that you can configure in your event subscription for any kind of topic. For more information, see [Partner Events Overview](partner-events-overview.md).
## Security
-Event Grid provides security for subscribing to topics, and publishing topics. When subscribing, you must have adequate permissions on the resource or event grid topic. When publishing, you must have a SAS token or key authentication for the topic. For more information, see [Event Grid security and authentication](security-authentication.md).
+Event Grid provides security for subscribing to topics and for publishing to topics. When subscribing, you must have adequate permissions on the resource or Event Grid topic. When publishing, you must have a SAS token or key authentication for the topic. For more information, see [Event Grid security and authentication](security-authentication.md).
## Event delivery
If Event Grid can't confirm that an event has been received by the subscriber's
## Batching
-When using a custom topic, events must always be published in an array. This can be a batch of one for low-throughput scenarios, however, for high volume use cases, it's recommended that you batch several events together per publish to achieve higher efficiency. Batches can be up to 1 MB and the maximum size of an event is 1 MB.
+When you use a custom topic, events must always be published in an array. This can be a batch of one for low-throughput scenarios. However, for high-volume use cases, it's recommended that you batch several events together per publish to achieve higher efficiency. Batches can be up to 1 MB, and the maximum size of an event is 1 MB.
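As an illustration, a minimal sketch of publishing a batch (here, a batch of one) to a custom topic with curl; the endpoint, key, and event fields are placeholders:

```bash
# POST an array of events to the topic endpoint, authenticating with the topic key
curl -X POST "https://<topic-name>.<region>-1.eventgrid.azure.net/api/events" \
  -H "aeg-sas-key: <topic-access-key>" \
  -H "Content-Type: application/json" \
  -d '[{
        "id": "10001",
        "eventType": "recordInserted",
        "subject": "myapp/vehicles/motorcycles",
        "eventTime": "2022-03-30T21:03:07Z",
        "dataVersion": "1.0",
        "data": { "make": "Contoso", "model": "Monster" }
      }]'
```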
+ ## Next steps * For an introduction to Event Grid, see [About Event Grid](overview.md).
-* To quickly get started using Event Grid, see [Create and route custom events with Azure Event Grid](custom-event-quickstart.md).
+* To quickly get started using Event Grid, see [Create and route custom events with Azure Event Grid](custom-event-quickstart.md).
event-grid Deliver Events To Partner Destinations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/deliver-events-to-partner-destinations.md
+
+ Title: Azure Event Grid - deliver events to partner destinations
+description: This article explains how to use a partner destination as a handler for events.
+ Last updated : 03/31/2022++
+# Deliver events to a partner destination (Azure Event Grid)
+In the Azure portal, when creating an event subscription for a topic (system topic, custom topic, domain topic, or partner topic) or a domain, you can specify a partner destination as an endpoint. This article shows you how to create an event subscription using a partner destination so that events are delivered to a partner system.
+
+## Overview
+As an end user, you authorize your partner to create a partner destination in a resource group within your Azure subscription. For details, see [Authorize partner to create a partner destination](subscribe-to-partner-events.md#authorize-partner-to-create-a-partner-topic).
+
+A partner creates a channel that in turn creates a partner destination in the Azure subscription and resource group that you provided to the partner. Before using it, you must activate the partner destination. Once activated, you can select the partner destination as a delivery endpoint when creating or updating event subscriptions.
+
+## Activate a partner destination
+Before you can use a partner destination as an endpoint for an event subscription, you need to activate the partner destination.
+
+1. In the search bar of the Azure portal, search for and select **Event Grid Partner Destinations**.
+1. On the **Event Grid Partner Destinations** page, select the partner destination in the list.
+1. Review the activate message, and select **Activate** on the page or on the command bar to activate the partner destination before the expiration time mentioned on the page.
+1. Confirm that the activation status is set to **Activated**.
++
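Assuming your Azure CLI version includes the partner destination command group (availability may vary while the feature is in preview), the activation might also be scripted along these lines:

```bash
# Activate a partner destination created by a partner in your resource group
az eventgrid partner destination activate --resource-group <resource-group> --name <partner-destination-name>
```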
+## Create an event subscription using partner destination
+
+In the Azure portal, when creating an [event subscription](subscribe-through-portal.md), follow these steps:
+
+1. In the **Endpoint details** section, select **Partner Destination** for **Endpoint Type**.
+1. Click **Select an endpoint**.
+
+ :::image type="content" source="./media/deliver-events-to-partner-destinations/select-endpoint-link.png" alt-text="Screenshot showing the Create Event Subscription page with Select an endpoint link selected.":::
+1. On the **Select Partner Destination** page, select the **Azure subscription** and **resource group** that contains the partner destination.
+1. For **Partner Destination**, select a partner destination.
+1. Select **Confirm selection**.
+
+ :::image type="content" source="./media/deliver-events-to-partner-destinations/subscription-partner-destination.png" alt-text="Screenshot showing the Select Partner Destination page.":::
+1. On the **Create Event Subscription** page, confirm that **Endpoint Type** is set to **Partner Destination** and that the endpoint is set to a partner destination, and then select **Create**.
+
+ :::image type="content" source="./media/deliver-events-to-partner-destinations/partner-destination-configure.png" alt-text="Screenshot showing the Create Event Subscription page with a partner destination configured.":::
+
+## Next steps
+See the following articles:
+
+- [Authorize partner to create a partner destination](subscribe-to-partner-events.md#authorize-partner-to-create-a-partner-topic)
+- [Create a channel](onboard-partner.md#create-a-channel) - see the steps to create a channel with partner destination as the channel type.
event-grid Onboard Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/onboard-partner.md
Title: Onboard as an Azure Event Grid partner using Azure portal description: Use Azure portal to onboard an Azure Event Grid partner. Previously updated : 10/29/2020 Last updated : 03/31/2022 # Onboard as an Azure Event Grid partner using the Azure portal
-This article describes the way third-party SaaS providers, also known as event publishers or partners, are onboarded to Event Grid to be able to publish events from their services and how those events are consumed by end users.
+This article describes the way third-party SaaS providers, also known as [partners](concepts.md#partners), are onboarded to Event Grid to be able to publish events from their services and how those events are consumed by end users.
> [!IMPORTANT]
-> If you are not familiar with Partner Events, see [Partner Events overview](partner-events-overview.md) for a detailed introduction of key concepts that are critical to understand and follow the steps in this article.
+> Partners can publish events to Azure Event Grid, and also receive events from it. These capabilities are enabled through the [Partner Events](partner-events-overview.md) feature. If you aren't familiar with Partner Events, see [Partner Events overview](partner-events-overview-for-partners.md) for a detailed introduction of key concepts that are critical to understand and follow the steps in this article.
-## Onboarding process for event publishers (partners)
+## Onboarding process for partners
In a nutshell, enabling your service's events to be consumed by users typically involves the following process:
-1. **Communicate your interest** in becoming a partner to the Event Grid service team before proceeding with the next steps.
-1. Create a partner topic type by creating a **registration**.
-1. Create a **namespace**.
-1. Create an **event channel** and **partner topic** (single step).
-1. Test the Partner Events functionality end to end.
+1. [Communicate your interest in becoming a partner](#communicate-your-interest-in-becoming-a-partner) to the Event Grid service team.
+1. [Register the Event Grid resource provider](#register-the-event-grid-resource-provider) with your Azure subscription.
+1. [Create a **partner registration**](#create-a-partner-registration).
+1. [Create a **namespace**](#create-a-partner-namespace).
+1. [Create a **channel** and a **partner topic** or a **partner destination** in a single step](#create-a-channel).
-For step #4, you should decide what kind of user experience you want to provide. You have the following options:
-- Provide your own solution, typically a web graphical user interface (GUI) experience, hosted under your domain using our SDK and/or REST API to create an event channel and its corresponding partner topic. With this option, you can ask the user for the subscription and resource group under which you'll create a partner topic.-- Use Azure portal or CLI to create the event channel and associated partner topic. With this option, you must have get in the user's Azure subscription some way and resource group under which you'll create a partner topic.
+ > [!IMPORTANT]
+ > You may still be able to create an **event channel** (legacy), which supports only partner topics, not partner destinations. **Channel** is the new routing resource type and is the preferred option because it supports both sending events via partner topics and receiving events via partner destinations. An **event channel** is a legacy resource and will be deprecated soon.
+1. Test the Partner Events functionality end to end.
-This article shows you how to onboard as an Azure Event Grid partner using the Azure portal.
+For step #5, you should decide what kind of user experience you want to provide. You have the following options:
+- Provide your own solution, typically a web graphical user interface (GUI) experience hosted under your domain, using our SDK and/or REST API to create a channel (latest and recommended) or an event channel (legacy) and its corresponding partner topic. With this option, you can ask the user for the subscription and resource group under which you'll create a partner topic.
+- Use the Azure portal or CLI to create the channel (recommended) or event channel (legacy) and an associated partner topic. With this option, you must obtain from the user, in some way, the Azure subscription and resource group under which you'll create a partner topic.
-> [!NOTE]
-> Registering a partner topic type is an optional step. To help you decide if you should create a partner topic type, see [Resources managed by event publisher](partner-events-overview.md#resources-managed-by-event-publishers).
+This article shows you how to **onboard as an Azure Event Grid partner** using the **Azure portal**.
## Communicate your interest in becoming a partner Fill out [this form](https://aka.ms/gridpartnerform) and contact the Event Grid team at [GridPartner@microsoft.com](mailto:GridPartner@microsoft.com). We'll have a conversation with you providing detailed information on Partner Events' use cases, personas, onboarding process, functionality, pricing, and more.
To complete the remaining steps, make sure you have:
- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. - An Azure [tenant](../active-directory/develop/quickstart-create-new-tenant.md).
-## Register a partner topic type (optional)
++
+## Create a partner registration
+ 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select **All services** from the left navigation pane, then type in **Event Grid Partner Registrations** in the search bar, and select it. 1. On the **Event Grid Partner Registrations** page, select **+ Add** on the toolbar.
To complete the remaining steps, make sure you have:
1. For **Registration name**, enter a name for the registration. 1. For **Organization name**, enter the name of your organization. 1. In the **Partner resource type** section, enter details about your resource type that will be displayed on the **partner topic create** page:
- 1. For **Partner resource type name**, enter the name for the resource type. This will be the type of partner topic that will be created in your Azure subscription.
+ 1. For **Partner resource type name**, enter the name for the resource type.
2. For **Display name**, enter a user-friendly display name for the partner topic (resource) type. 3. Enter a **description for the resource type**. 4. Enter a **description for the scenario**. It should explain the ways or scenarios in which the partner topics for your resources can be used. :::image type="content" source="./media/onboard-partner/create-partner-registration-page.png" alt-text="Create partner registration":::
-1. Select **Next: Custom Service** at the bottom of the page. On the **Customer Service** tab of the **Create Partner Registration** page, enter information that subscriber users will use to contact you in case of a problem with the event source:
+1. Select **Next: Customer Service** at the bottom of the page. On the **Customer Service** tab of the **Create Partner Registration** page, enter information that subscriber users will use to contact you when there's a problem with the event source:
1. Enter the **Phone number**. 1. Enter **extension** for the phone number. 1. Enter a support web site **URL**.
To complete the remaining steps, make sure you have:
1. On the **Tags** page, configure the following values. 1. Enter a **name** and a **value** for the tag you want to add. This step is **optional**. 1. Select **Review + create** at the bottom of the page to create the registration (partner topic type).
+1. On the **Review + create** page, review all settings, and then select **Create** to create the partner registration.
## Create a partner namespace 1. In the Azure portal, select **All services** from the left navigational menu, then type **Event Grid Partner Namespaces** in the search bar, and then select it from the list.
-1. On the **Event Grid Partner Namespaces** page, select **+ Add** on the toolbar.
+1. On the **Event Grid Partner Namespaces** page, select **+ Create** on the toolbar.
:::image type="content" source="./media/onboard-partner/add-partner-namespace-link.png" alt-text="Partner namespaces - Add link"::: 1. On the **Create Partner Namespace - Basics** page, specify the following information.
To complete the remaining steps, make sure you have:
1. In the **Namespace details** section, do the following steps: 1. Enter a **name** for the namespace. 1. Select a **location** for the namespace.
- 1. In the **Registration details** section, do the following steps to associate the namespace with a partner registration.
+ 1. For **Partner topic routing mode**, select **Channel name header** or **Source attribute in event**.
+
+    - **Channel name header** routing: With this kind of routing, you publish events using an HTTP header called `aeg-channel-name`, where you provide the name of the channel to which events should be routed. If you select this option, you'll create **channels** in the namespace. (A publishing sketch follows this procedure.)
+    - **Source attribute in event** routing: This routing approach is based on the value of the `source` context attribute in the event. If you select this option, you'll create **event channels**, which are the legacy equivalent of **channels** and will be deprecated soon.
+
+ > [!IMPORTANT]
+ > - It's not possible to update the routing mode once the namespace is created.
+ > - **Channel** is the new routing resource type and is the preferred option. An event channel is a legacy resource and will be deprecated soon.
+ 1. In the **Registration details** section, follow these steps to associate the namespace with a partner registration.
1. Select the **subscription** in which the partner registration exists. 1. Select the **resource group** that contains the partner registration. 1. Select the **partner registration** from the drop-down list.
To complete the remaining steps, make sure you have:
1. Select **Review + create** at the bottom of the page. 1. On the **Review + create** page, review the details, and select **Create**.
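With **Channel name header** routing in place, publishing is an HTTP POST against the namespace endpoint. The following is a minimal sketch based on the publishing conventions shown elsewhere in this document (CloudEvents 1.0 batch, `aeg-sas-key` authentication); the endpoint host, API version, key, channel name, and event fields are placeholders, not values confirmed for your namespace.

```bash
# Minimal publishing sketch for a namespace in "Channel name header" routing
# mode. Endpoint, api-version, key, and channel name are placeholders; the
# aeg-channel-name header selects the channel that routes the events.
ENDPOINT="https://contoso.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01"

curl -X POST "$ENDPOINT" \
  -H "aeg-sas-key: $NAMESPACE_KEY" \
  -H "aeg-channel-name: my-channel" \
  -H "Content-Type: application/cloudevents-batch+json; charset=UTF-8" \
  -d '[{
        "specversion": "1.0",
        "type": "com.contoso.ticketcreated",
        "source": "com.contoso.account1",
        "subject": "tickets/123",
        "id": "A234-1234-1234",
        "time": "2019-04-05T17:31:00Z",
        "datacontenttype": "application/json",
        "data": { "ticketId": 123 }
      }]'
```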
+## Create a channel
+If you selected **Channel name header** for **Partner topic routing mode**, create a channel by following the steps in this section.
+
+1. Go to the **Overview** page of the partner namespace you created, and select **+ Channel** on the command bar.
+
+ :::image type="content" source="./media/onboard-partner/create-channel-button.png" lightbox="./media/onboard-partner/create-channel-button.png" alt-text="Image showing the selection of Create Channel button on the command bar of the Event Grid Partner Namespace page.":::
+1. On the **Create Channel - Basics** page, follow these steps.
+    1. Enter a **name** for the channel. The channel name should be unique across the region in which it's created.
+ 1. For the channel type, select **Partner Topic** or **Partner Destination**.
+
+ Partner topics are resources that hold published events. Partner destinations define target endpoints or services to which events are delivered.
+
+ Select **Partner Topic** if you want to **forward events to a partner topic** that holds events to be processed by a handler later.
+
+ Select **Partner Destination** if you want to **forward events to a partner service** that processes the events.
+    1. If you selected **Partner Topic**, enter the following details:
+ 1. **ID of the subscription** in which the partner topic will be created.
+ 1. **Resource group** in which the partner topic will be created.
+ 1. **Name** of the partner topic.
+ 1. Specify **source** information for the partner topic. Source is contextual information on the source of events provided by the partner that the end user can see. This information is helpful when end user is considering activating a partner topic, for example.
+
+ :::image type="content" source="./media/onboard-partner/channel-partner-topic-basics.png" alt-text="Image showing the Create Channel - Basics page.":::
+ 1. If you selected **Partner Destination**, enter the following details:
+        1. **ID of the subscription** in which the partner destination will be created.
+        1. **Resource group** in which the partner destination will be created.
+        1. **Name** of the partner destination.
+ 1. In the **Endpoint Details** section, specify the following values.
+ 1. For **Endpoint URL**, specify the endpoint URL to which events are delivered.
+            1. For **Endpoint context**, enter additional information about the destination that helps end users understand where events are delivered.
+ 1. For **Azure AD tenant ID**, specify the Azure Active Directory tenant ID used by Event Grid to authenticate to the destination endpoint URL.
+ 1. For **Azure AD app ID or URI**, specify the Azure AD application ID (also called client ID) or application URI used by Event Grid to authenticate to the destination endpoint URL.
+
+ :::image type="content" source="./media/onboard-partner/create-channel-partner-destination.png" alt-text="Image showing the Create Channel page with partner destination options.":::
+ 1. Select **Next: Additional Features** link at the bottom of the page.
+ 1. On the **Additional Features** page, follow these steps:
+        1. To set your own activation message that can help the end user activate the associated partner topic, select the check box next to **Set your own activation message**, and enter the message.
+        1. For **expiration time**, set the time at which the associated partner topic and this channel are automatically deleted if the end user doesn't activate the partner topic.
+        1. Select **Next: Review + create**.
+
+ :::image type="content" source="./media/onboard-partner/create-channel-additional-features.png" alt-text="Image showing the Create Channel - Additional Features page.":::
+ 1. On the **Review + create** page, review all the settings for the channel, and select **Create** at the bottom of the page.
+
+ **Partner topic** option:
+ :::image type="content" source="./media/onboard-partner/create-channel-review-create.png" alt-text="Image showing the Create Channel - Review + create page.":::
+
+ **Partner destination** option:
+ :::image type="content" source="./media/onboard-partner/create-channel-review-create-destination.png" alt-text="Image showing the Create Channel - Review + create page when the Partner Destination option is selected.":::
+
+
+
## Create an event channel+
+If you selected **Source attribute in event** for **Partner topic routing mode**, create an event channel by following the steps in this section. A publishing sketch for this routing mode follows the procedure.
+ > [!IMPORTANT]
-> You'll need to request from your user an Azure subscription, resource group, location, and partner topic name to create a partner topic that your user will own and manage.
+> - **Channel** is the new routing resource type and is the preferred option. An **event channel** is a legacy resource and will be deprecated soon.
1. Go to the **Overview** page of the namespace you created.
To complete the remaining steps, make sure you have:
1. Select **Next: Additional features** at the bottom of the page. :::image type="content" source="./media/onboard-partner/create-event-channel-filters-page.png" alt-text="Create event channel - filters page":::
- create-event-channel-filters-page.png
1. On the **Additional features** page, you can set an **expiration time** and **description for the partner topic**. 1. The **expiration time** is the time at which the topic and its associated event channel will be automatically deleted if not activated by the customer. A default of seven days is used in case a time isn't provided. Select the checkbox to specify your own expiration time. 1. As this topic is a resource that's not created by the user, a **description** can help the user with understanding the nature of this topic. A generic description will be provided if none is set. Select the checkbox to set your own partner topic description.
To complete the remaining steps, make sure you have:
:::image type="content" source="./media/onboard-partner/create-event-channel-additional-features-page.png" alt-text="Create event channel - additional features page"::: 1. On the **Review + create** page, review the settings, and select **Create** to create the event channel.
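For contrast with channel name header routing, a namespace in **Source attribute in event** mode takes no routing header: Event Grid matches the event's `source` value to the event channel whose source mapping contains it. A minimal sketch, again with placeholder endpoint, key, and event values:

```bash
# Publishing sketch for "Source attribute in event" routing mode: there's no
# aeg-channel-name header; routing is driven by the "source" attribute.
ENDPOINT="https://contoso.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01"

curl -X POST "$ENDPOINT" \
  -H "aeg-sas-key: $NAMESPACE_KEY" \
  -H "Content-Type: application/cloudevents-batch+json; charset=UTF-8" \
  -d '[{
        "specversion": "1.0",
        "type": "com.contoso.ticketclosed",
        "source": "com.contoso.account1",
        "subject": "tickets/456",
        "id": "B234-1234-1234",
        "time": "2019-04-05T17:31:00Z",
        "datacontenttype": "application/json",
        "data": { "ticketId": 456 }
      }]'
```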
+## Activate partner topics and partner destinations
+Before your users can subscribe to the partner topics you create in their Azure subscriptions, they'll have to activate those partner topics first. For details, see [Activate a partner topic](subscribe-to-partner-events.md#activate-a-partner-topic).
+
+Similarly, before your users can use the partner destinations you create in their subscriptions, they'll have to activate those partner destinations first. For details, see [Activate a partner destination](deliver-events-to-partner-destinations.md#activate-a-partner-destination).
+
## Next steps - [Partner topics overview](./partner-events-overview.md)-- [Partner topics onboarding form](https://aka.ms/gridpartnerform)
+- [Partner topics onboarding page](https://aka.ms/gridpartnerform)
- [Auth0 partner topic](auth0-overview.md) - [How to use the Auth0 partner topic](auth0-how-to.md)
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
+
+ Title: Partner Events overview for system owners who want to become partners
+description: Provides an overview of the concepts and general steps to become a partner.
+ Last updated : 03/31/2021++
+# Partner Events overview for partners - Azure Event Grid (preview)
+Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customers' solutions and endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams. They purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by a partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers bank on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few. Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Azure Storage, Outlook, Teams, or Azure AD, to partner systems where customers' solutions can react to them. With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely, and at scale.
+
+> [!NOTE]
+> This is a conceptual article that's required reading before you decide to onboard as a partner to Azure Event Grid. For step-by-step instructions on how to onboard as an Event Grid partner using the Azure portal, see [How to onboard as an Event Grid partner (Azure portal)](onboard-partner.md).
+
+## Partner Events: How it works
+
+As a partner, you create Event Grid resources that enable you to publish events to Azure Event Grid so that customers on Azure can subscribe to them. For most partners, for example SaaS providers, this is the only integration capability they'll use.
+
+You can also create Event Grid resources to receive events from Azure Event Grid. This use case is for organizations that own or manage a platform that enables their customers to receive events by exposing endpoints. Some of those organizations are ERP systems that also have event routing capabilities within their platform, which they use to send the incoming Azure events to a customer application hosted on their platform.
+
+For either publishing events or receiving events, you create the same kind of Event Grid [resources](#resources-managed-by-partners) following these general steps.
+
+1. Communicate your interest in becoming a partner by sending an email to [GridPartner@microsoft.com](mailto:GridPartner@microsoft.com). Once you contact us, we'll guide you through the onboarding process and help your service get an entry card on our [Azure Event Grid gallery](https://portal.azure.com/#create/Microsoft.EventGridPartnerTopic) so that your service can be found on the Azure portal.
+2. Create a [partner registration](#partner-registration). This is a global resource that you usually need to create only once.
+3. Create a [partner namespace](#partner-namespace). This resource exposes an endpoint to which you can publish events to Azure. When creating the partner namespace, provide the partner registration you created.
+4. Customer authorizes you to create a partner resource, either a [partner topic](concepts.md#partner-topics) or a [partner destination](concepts.md#partner-destination), in customer's Azure subscription.
+5. Customer accesses your web page or executes a command (you define the user experience) to request either the flow of your events to Azure or the ability to receive Microsoft events into your system. In response to that request, you set up your system accordingly with input from the customer. For example, the customer may have the option to select certain events from your system that should be forwarded to Azure.
+6. According to the customer's requirements, you create a partner topic or a partner destination under the customer's Azure subscription and resource group, with the name the customer provides to you. You achieve this by using channels: create a [channel](#channel) of type `partner topic` if the customer wants to receive your events on Azure, or of type `partner destination` if the customer wants to send events to your system. Channels are resources contained by partner namespaces.
+7. Customer activates the partner topic or the partner destination that you created in their Azure subscription and resource group.
+8. If you created a partner topic, start publishing events to your partner namespace. If you created a partner destination, expect events coming to your system endpoints defined in the partner definition.
+
+ >[!NOTE]
+ > You must [register the Azure Event Grid resource provider](subscribe-to-partner-events.md#register-the-event-grid-resource-provider) with every Azure subscription where you want to create Event Grid resources. Otherwise, operations to create resources will fail. A registration example follows this list.
++
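+Registering the resource provider is a one-time, per-subscription operation. A minimal example with the Azure CLI:
+
+```bash
+# Register the Event Grid resource provider with the current subscription,
+# then confirm that the registration state reads "Registered".
+az provider register --namespace Microsoft.EventGrid
+az provider show --namespace Microsoft.EventGrid --query registrationState
+```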
+## Why should I use Partner Events?
+You may want to use the Partner Events feature if you have one or more of the following requirements.
+
+### For partners as event publishers
+
+- You want a mechanism to make your events available to your customers on Azure. Your users can filter and route those events by using partner topics and event subscriptions they own and manage. You could use other integration approaches such as [topics](custom-topics.md) and [domains](event-domains.md). However, those approaches wouldn't allow for a clean separation of resource ownership, management, and billing between you and your customer. The Partner Events feature also provides a more intuitive user experience that makes it easy to discover your service.
+- You need a simple multi-tenant model where you publish events to a single regional endpoint, the namespace's endpoint, to route the events to different customers.
+- You want to have visibility into metrics related to published events.
+- You want to use [Cloud Events 1.0](https://cloudevents.io/) schema for your events.
+
+### For partners as event subscribers
+
+- You want your service to react to customer events that originate in Microsoft/Azure.
+- You want your customer to react to Microsoft/Azure service events using their applications hosted by your platform. You use your platform's event routing capabilities to deliver events to the right customer solution.
+- You want a simple model where your customers just select your service name as a destination without the need for them to know technical details like your platform endpoints.
+- Your system/platform supports [Cloud Events 1.0](https://cloudevents.io/) schema.
+
+## Resources managed by partners
+As a partner, you manage the following types of resources.
+
+### Partner registration
+A registration holds general information related to a partner. A registration is required when creating a partner namespace. That is, you must have a partner registration to create the necessary Azure resources to integrate with Azure Event Grid.
+
+Registrations are global. That is, they aren't associated with a particular Azure region. You may create a single partner registration and use that when creating your partner namespaces.
+
+### Channel
+A channel is a nested resource of a partner namespace. A channel has two main purposes:
+ - It's the resource type that allows you to create partner resources in a customer's Azure subscription. When you create a channel of type `partner topic`, a partner topic is created in a customer's Azure subscription. A partner topic is the customer's resource to which events from a partner system are delivered. Similarly, when a channel of type `partner destination` is created, a partner destination is created in a customer's Azure subscription. Partner destinations are resources that represent a partner system endpoint to which events are delivered. A channel is the kind of resource, along with partner topics and partner destinations, that enables bi-directional event integration.
+
+ A channel has the same lifecycle as its associated customer partner topic or destination. When a channel of type `partner topic` is deleted, for example, the associated customer's partner topic is deleted. Similarly, if the partner topic is deleted by the customer, the associated channel on your Azure subscription is deleted.
+ - It's a resource that is used to route events. A channel of type ``partner topic`` is used to route events to a customer's partner topic. It supports two types of routing modes.
+    - **Channel name routing**. With this kind of routing, you publish events using an HTTP header called `aeg-channel-name`, where you provide the name of the channel to which events should be routed. As channels are a partner's representation of partner topics, the events routed to a channel show up on the customer's partner topic. This kind of routing is a new capability not present in `event channels`, which support only source-based routing. Channel name routing enables more use cases than source-based routing, and it's the recommended routing mode. For example, with channel name routing, a customer can request that events originating in different event sources land on a single partner topic.
+    - **Source-based routing**. This routing approach is based on the value of the `source` context attribute in the event. Sources are mapped to channels, and when an event arrives with a source value of, say, "A", that event is routed to the partner topic associated with the channel that contains "A" in its source property.
+
+    A channel of type ``partner destination`` is used to route events to a partner system. When creating a channel of this type, you provide your webhook URL where you receive the events published by Azure Event Grid. Once the channel is created, a customer can select the partner destination resource as the destination when creating an [event subscription](subscribe-through-portal.md) to deliver events to the partner system. Event Grid publishes events with the request including the HTTP header `aeg-channel-name` too. Its value can be used to associate the incoming events with the specific user who originally requested the partner destination.
+
+ A customer can use your partner destination to send your service any kind of events available to [Event Grid](overview.md).
+
+### Partner namespace
+A partner namespace is a regional resource that has an endpoint to publish events to Azure Event Grid. Partner namespaces contain either channels or event channels (legacy resource). You must create partner namespaces in regions where customers request partner topics or destinations because channels and their corresponding partner resources must reside in the same region. You can't have a channel in a given region with its related partner topic, for example, located in a different region.
+
+Partner namespaces contain either channels or event channels. This is determined by the **partner topic routing mode** property on the namespace. If it's set to **Channel name header**, channels are the only type of resource that can be created under the namespace. If the partner topic routing mode is set to **Source attribute in event**, the namespace can only contain event channels. Keep in mind that choosing the right ``partner topic routing mode`` isn't a choice between channel name and source-based routing; channels support both. Rather, it's a choice between using the new type of routing resource, channels, and using a legacy resource, event channels.
+
+### Event channel
+
+An event channel is the resource type that was first released with Partner Events to route incoming events to partner topics. Event channels support only source-based routing, and they always represent a customer's partner topic.
+
+>[!IMPORTANT]
+>Event channels are being deprecated. Hence, we recommend that you use channels instead.
+
+## Verified partners
+
+A verified partner is a partner organization whose identity has been validated by Microsoft. We strongly encourage your organization to get verified. Customers seek to engage with verified partners because verification provides greater assurance that they're dealing with a legitimate organization. Once verified, you benefit from having a presence in the [Event Grid Gallery](https://portal.azure.com/#create/Microsoft.EventGridPartnerTopic), where customers can discover your service easily and have a first-party experience when subscribing to your events, for example.
+
+## Customer's authorization to create partner topics and partner destinations
+
+Customers authorize you to create partner topics or partner destinations in their Azure subscription. The authorization is granted for a given resource group in a customer's Azure subscription, and it's time bound. You must create the channel before the expiration date set by the customer. Your documentation should suggest to the customer an adequate window of time for configuring your system to send or receive events and for creating the channel before the authorization expires. If you attempt to create a channel without authorization, or after the authorization has expired, the channel creation fails and no resource is created in the customer's Azure subscription.
+
+>[!IMPORTANT]
+>A verified partner is not an authorized partner. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic or partner destination on the customer's Azure subscription.
+
+## Partner topic and partner destination activation
+
+The customer activates the partner topic or destination you've created for them. At that point, the channel's activation status changes to **Activated**. Once a channel is activated, you can start publishing events to the endpoint of the partner namespace that contains the channel.
+
+### How do you automate the process to know when you can start publishing events for a given partner topic?
+
+You have two options:
+- Read (poll) the channel state periodically to check if the activation status has transitioned from **NeverActivated** to **Activated**. This operation can be computationally intensive. (A polling sketch follows this list.)
+- Create an [event subscription](subscribe-through-portal.md) for the [Azure subscription](event-schema-subscriptions.md#available-event-types) or [resource group](event-schema-resource-groups.md#available-event-types) that contains the channel(s) you want to monitor. You'll receive `Microsoft.Resources.ResourceWriteSuccess` events whenever a channel is updated. You'll then need to read the state of the channel with the Azure Resource Manager ID provided in the event to ascertain that the update is related to a change in the activation status to **Activated**.
+
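+If you take the polling route, a minimal sketch with the Azure CLI might look like the following. The channel's ARM resource ID format and the `properties.readinessState` property path are assumptions for this preview API; verify them against the current REST reference before relying on them.
+
+```bash
+# Hypothetical polling sketch: read the channel's activation state by its
+# ARM resource ID. The properties.readinessState path is an assumption.
+CHANNEL_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventGrid/partnerNamespaces/<ns>/channels/<channel>"
+
+az resource show --ids "$CHANNEL_ID" --query "properties.readinessState" -o tsv
+
+# Alternative: subscribe to resource-group events and watch for
+# Microsoft.Resources.ResourceWriteSuccess, as described above.
+az eventgrid event-subscription create \
+  --name channel-monitor \
+  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>" \
+  --included-event-types Microsoft.Resources.ResourceWriteSuccess \
+  --endpoint "https://example.com/webhook"
+```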
+## References
+
+ * [Swagger](https://github.com/ahamad-MS/azure-rest-api-specs/blob/master/specification/eventgrid/resource-manager/Microsoft.EventGrid/preview/2020-04-01-preview/EventGrid.json)
+ * [ARM template](/azure/templates/microsoft.eventgrid/allversions)
+ * [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/master/schemas/2020-04-01-preview/Microsoft.EventGrid.json)
+ * [REST APIs](/azure/templates/microsoft.eventgrid/2020-04-01-preview/partnernamespaces)
+ * [CLI extension](/cli/azure/eventgrid)
+
+### SDKs
+ * [.NET](https://www.nuget.org/packages/Microsoft.Azure.Management.EventGrid/5.3.1-preview)
+ * [Python](https://pypi.org/project/azure-mgmt-eventgrid/3.0.0rc6/)
+ * [Java](https://search.maven.org/artifact/com.microsoft.azure.eventgrid.v2020_04_01_preview/azure-mgmt-eventgrid/1.0.0-beta-3/jar)
+ * [Ruby](https://rubygems.org/gems/azure_mgmt_event_grid/versions/0.19.0)
+ * [JS](https://www.npmjs.com/package/@azure/arm-eventgrid/v/7.0.0)
+ * [Go](https://github.com/Azure/azure-sdk-for-go)
++
+## Next steps
+- [How to onboard as an Event Grid partner (Azure portal)](onboard-partner.md)
+- [Partner topics onboarding form](https://aka.ms/gridpartnerform)
+- [Partner topics overview](partner-events-overview.md)
+- [Auth0 partner topic](auth0-overview.md)
+- [How to use the Auth0 partner topic](auth0-how-to.md)
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md
Title: Azure Event Grid - Partner Events
-description: Send events from third-party Event Grid SaaS and PaaS partners directly to Azure services with Azure Event Grid.
+ Title: Partner Events overview for customers
+description: Send events to or receive events from a SaaS or ERP system directly through Azure Event Grid.
Previously updated : 06/15/2021 Last updated : 03/31/2022
-# Partner Events in Azure Event Grid (preview)
-The **Partner Events** feature allows a third-party SaaS provider to publish events from its services so that consumers can subscribe to those events. This feature offers a first-party experience to third-party event sources by exposing a [topic](concepts.md#topics) type, a **partner topic**. Subscribers create subscriptions to this topic to consume events. It also provides a clean pub-sub model by separating concerns and ownership of resources that are used by event publishers and subscribers.
+# Partner Events overview for customers - Azure Event Grid (preview)
+Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customers' solutions and endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams. They purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by a partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers bank on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few. Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Azure Storage, Outlook, Teams, or Azure AD, to partner systems where customers' solutions can react to them. With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely, and at scale.
> [!NOTE]
-> If you're new at using Event Grid, see [overview](overview.md), [concepts](concepts.md), and [event handlers](event-handlers.md).
+> If you're new to Event Grid, see the following articles that provide you with knowledge on foundational concepts:
+> - [Overview](overview.md)
+> - [Concepts](concepts.md)
+> - [Event handlers](event-handlers.md)
-## What is Partner Events to a publisher?
-To an event publisher, the Partner Events feature allows publishers to do the following tasks:
+## Receive events from a partner
-- Onboard their event sources to Event Grid-- Create a namespace (endpoint) to which they can publish events-- Create partner topics in Azure that subscribers own and use to consume events
+You receive events from a partner in a [partner topic](concepts.md#partner-topics) that's created on your behalf by the partner. Here are the high-level steps to subscribe to events from a partner.
-## What is Partner Events to a subscriber?
-To a subscriber, the Partner Events feature allows them to create partner topics in Azure to consume events from third-party event sources. Event consumption is realized by creating event subscriptions that send (push) events to a subscriber's event handler.
+1. **Authorize partner to create a partner topic** in a resource group you designate. Authorizations are stored in partner configurations, which are Azure resources.
+2. Request the partner to forward your events from its service to your partner topic. **The partner provisions a partner topic** in the specified resource group of your Azure subscription.
+3. After the partner creates a partner topic in your Azure subscription and resource group, **activate** your partner topic.
+4. **Subscribe to events** by creating one or more [event subscriptions](subscribe-through-portal.md) on the partner topic.
-## Why should I use Partner Events?
-You may want to use the Partner Events if you've one or more of the following requirements.
+
+> [!NOTE]
+> You must [register the Azure Event Grid resource provider](subscribe-to-partner-events.md#register-the-event-grid-resource-provider) with every Azure subscription where you want to create Event Grid resources. Otherwise, operations to create resources will fail.
-### For publishers
+## Send events to a partner
-- You want a mechanism to make your events available on Azure. Your users can filter and route those events by using partner topics and event subscriptions that they own and manage. You could use other integration approaches such as [topics](custom-topics.md) and [domains](event-domains.md). But, they wouldn't allow for a clean separation of resource (partner topics) ownership, management, and billing between publishers and subscribers. Also, this approach provides more intuitive user experience that makes it easy to discover and use partner topics.-- You want to publish events to a single endpoint, the namespaceΓÇÖs endpoint. And, you want the ability to filter those events so that only a subset of them is available. -- You want to have visibility into metrics related to published events.-- You want to use [Cloud Events 1.0](https://cloudevents.io/) schema for your events.
+The process to send events to a partner is similar to that of receiving events from a partner. You send events to a partner using a [partner destination](concepts.md#partner-destination) that's created by the partner upon your request. A partner destination is a kind of resource that contains information such as the partner's endpoint URL to which Event Grid sends events. Here are the steps to send events to a partner.
-### For subscribers
+1. **Authorize partner to create a partner destination** in a resource group you designate. Authorizations are stored in partner configurations.
+2. **Request partner to create a partner destination** resource in the specified Azure resource group in your Azure subscription. Prior to creating a partner destination, the partner should configure its system to be able to receive and, if supported, route your Microsoft events within its platform.
+1. After the partner creates a partner destination in your Azure subscription and resource group, **activate your partner destination**.
+1. **Subscribe to events** using [event subscriptions](subscribe-through-portal.md) on any kind of topic available to you: a system topic (Azure services), a custom topic or domain (your custom solutions), or a partner topic from another partner. When configuring your event subscription, select partner destination as the endpoint type, and then select the partner destination to which your events are going to flow.
-- You want to subscribe to events from [third-party publishers](#available-third-party-event-publishers) and handle the events using event handlers that are on Azure or elsewhere.-- You want to take advantage of the rich set of routing features and [destinations/event handlers](overview.md#event-handlers) to process events from third-party sources. -- You want to implement loosely coupled architectures where your subscriber/event handler is unaware of the existence of the message broker used. ++
+## Why should I use Partner Events?
+You may want to use the Partner Events feature if you have one or more of the following requirements.
+
+- You want to subscribe to events that originate in a [partner](#available-partners) system and route them to event handlers on Azure or to any application or service with a public endpoint.
+- You want to take advantage of Event Grid's rich set of [destinations/event handlers](overview.md#event-handlers) that react to events from partners.
+- You want to forward events raised by your custom application on Azure, an Azure service, or a Microsoft service to your application or service hosted by the [partner](#available-partners) system. For example, you may want to send Azure AD, Teams, SharePoint, or Azure Storage events to a partner system on which you're a tenant for processing.
- You need a resilient push delivery mechanism with send-retry support and at-least once semantics. - You want to use [Cloud Events 1.0](https://cloudevents.io/) schema for your events.
+
+## Available partners
+A partner must go through an [onboarding process](onboard-partner.md) before a customer can start receiving events from or sending events to that partner. Following is the list of available partners and whether their services were designed to send events to or receive events from Event Grid.
-## Available third-party event publishers
-A third-party event publisher must go through an [onboarding process](partner-onboarding-overview.md) before a subscriber can start consuming its events.
-
+| Partner | Sends events to Azure? | Receives events from Azure? |
+| :-- | :--: | :--: |
+| Auth0 | Yes | N/A |
### Auth0
-**Auth0** is the first partner publisher available. You can create an [Auth0 partner topic](auth0-overview.md) to connect your Auth0 and Azure accounts. This integration allows you to react to, log, and monitor Auth0 events in real time. To try it out, see [Integrate Azure Event Grid with Auto0](auth0-how-to.md)
+[Auth0](https://auth0.com) is a managed authentication platform for businesses to authenticate, authorize, and secure access for applications, devices, and users. You can create an [Auth0 partner topic](auth0-overview.md) to connect your Auth0 and Azure accounts. This integration allows you to react to, log, and monitor Auth0 events in real time. To try it out, see [Integrate Azure Event Grid with Auth0](auth0-how-to.md).
-
-## Resources managed by event publishers
-Event publishers create and manage the following resources:
+## Verified partners
-### Partner registration
-A registration holds general information related to a publisher. It defines a type of partner topic that shows in the Azure portal as an option when users try to create a partner topic. A publisher may expose more than one or more partner topic types to fit the needs of its subscribers. That is, a publisher may create separate registrations (partner topic types) for events from different services. For example, for the human resources (HR) service, publisher may define a partner topic for events such as employee joined, employee promoted, and employee left the company.
+A verified partner is a partner organization whose identity has been validated by Microsoft. Not all partners are verified, because verification is requested by the partner. However, all partners in the [Event Grid Gallery](https://ms.portal.azure.com/#create/Microsoft.EventGridPartnerTopic) have been vetted, because verification is required before they can have a presence on the Azure portal.
-Keep in mind the following points:
+>[!IMPORTANT]
+>You should only work with verified partners. However, there are valid cases where you might work with partners that haven't been verified. For example, the partner may be a team in your own company that's the owner of a platform solution that publishes events to corporate applications.
-- Only Azure-approved partner registrations are visible. -- Registrations are global. That is, they aren't associated to a particular Azure region.-- A registration is an optional resource. But, we recommend that you (as a publisher) create a registration. It allows users to discover your topics on the **Create Partner Topic** page in the [Azure portal](https://portal.azure.com/#create/Microsoft.EventGridPartnerTopic). Then, user can select event types (for example, employee joined, employee left, and so on.) while creating event subscriptions.
+## Resources managed by customers
+You manage the following types of resources.
-### Namespace
-Like [custom topics](custom-topics.md) and [domains](event-domains.md), a partner namespace is a regional endpoint to publish events. It's through namespaces that publishers create and manage event channels. A namespace also functions as the container resource for event channels.
+- **Partner topic** is the resource where you receive your events from the partner.
+- **Partner destination** is a resource that represents the partner system to which you can send events.
+- **[Event subscriptions](subscribe-through-portal.md)** are where you select which events to forward to an Azure service, a partner destination, or a public webhook on Azure or elsewhere.
+- **Partner configurations** is the resource that holds your authorizations for partners to create partner resources.
+
+## Grant authorization to create partner topics and destinations
-### Event Channels
-An event channel is a mirrored resource to a partner topic. When a publisher creates an event channel in the publisherΓÇÖs Azure subscription, it also creates a partner topic under a subscriber's Azure subscription. The operations done against an event channel (except GET) will be applied to the corresponding subscriber partner topic, even deletion. However, only partner topics are the kind of resources on which subscriptions and event delivery can be configured.
+You must authorize partners to create partner topics or partner destinations before they attempt to create those resources. If you don't grant your authorization, a partner's attempt to create the partner resource will fail.
-## Resources managed by subscribers
-Subscribers can use partner topics defined by a publisher and it's the only type of resource they see and manage. Once a partner topic is created, a subscriber user can create event subscriptions defining filter rules to [destinations/event handlers](overview.md#event-handlers). To subscribers, a partner topic and its associated event subscriptions provide the same rich capabilities as [custom topics](custom-topics.md) and its related subscription(s) do with one notable difference: partner topics support only the [Cloud Events 1.0 schema](cloudevents-schema.md), which provides a richer set of capabilities than other supported schemas.
+You give consent for a partner to create partner topics or partner destinations by creating a **partner configuration** resource. You add a partner authorization to the partner configuration, identifying the partner and providing an authorization expiration time by which a partner topic or destination must be created. The only types of resources that partners can create with your permission are partner topics and partner destinations.
-The following image shows the flow of control plane operations.
+>[!IMPORTANT]
+> A verified partner isn't an authorized partner. Even if a partner has been vetted by Microsoft, you still need to authorize it before the partner can create resources on your behalf.
+## Subscribe to events from a partner system
+For detailed instructions on how to subscribe to events published by a partner, see [subscribe to partner events](subscribe-to-partner-events.md).
-1. Publisher creates a **partner registration**. Partner registrations are global. That is, they aren't associated with a particular Azure region. This step is optional.
-1. Publisher creates a **partner namespace** in a specific region.
-1. When Subscriber 1 tries to create a partner topic, an **event channel**, Event Channel 1, is created in the publisher's Azure subscription first.
-1. Then, a **partner topic**, Partner Topic 1, is created in the subscriber's Azure subscription. The subscriber needs to activate the partner topic.
-1. Subscriber 1 creates an **Azure Logic Apps subscription** to Partner Topic 1.
-1. Subscriber 1 creates an **Azure Blob Storage subscription** to Partner Topic 1.
-1. When Subscriber 2 tries to create a partner topic, another **event channel**, Event Channel 2, is created in the publisher's Azure subscription first.
-1. Then, the **partner topic**, Partner Topic 2, is created in the second subscriber's Azure subscription. The subscriber needs to activate the partner topic.
-1. Subscriber 2 creates an **Azure Functions subscription** to Partner Topic 2.
## Pricing
-Partner topics are charged by the number of operations done when using Event Grid. For more information on all types of operations that are used as the basis for billing and detailed price information, see [Event Grid pricing](https://azure.microsoft.com/pricing/details/event-grid/).
+Partner Events are charged by the number of operations done when routing events to or from Event Grid. For more information on all types of operations that are used as the basis for billing and detailed price information, see [Event Grid pricing](https://azure.microsoft.com/pricing/details/event-grid/).
## Limits See [Event Grid Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#event-grid-limits) for detailed information about the limits in place for partner topics. ## Next steps-
+- [subscribe to partner events](subscribe-to-partner-events.md)
- [Auth0 partner topic](auth0-overview.md) - [How to use the Auth0 partner topic](auth0-how-to.md)-- [Become an Event Grid partner](partner-onboarding-overview.md)
event-grid Partner Onboarding Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-onboarding-overview.md
- Title: Partner onboarding overview (Azure Event Grid)
-description: Provides an overview how you can onboard as an Event Grid partner.
- Previously updated : 09/28/2021--
-# Partner onboarding overview (Azure Event Grid)
-
-This article describes how to privately use the Azure Event Grid partner resources and how to become a publicly available partner topic type.
-
-You don't need special permission to begin using the Event Grid resource types associated with publishing events as an Event Grid partner. In fact, you can use them today to publish events privately to your own Azure subscriptions and to test out the resource model if you're considering becoming a partner.
-
-> [!NOTE]
-> For step-by-step instruction on how to onboard as an Event Grid partner by using the Azure portal, see [How to onboard as an Event Grid partner (Azure portal)](onboard-partner.md).
-
-## How Partner Events work
-The Partner Events feature take the existing architecture that Event Grid already uses to publish events from Azure resources, such as Azure Storage and Azure IoT Hub, and makes those tools publicly available for anyone to use. Using these tools is by default private to your Azure subscription only. To make your events publicly available, fill out the form and [contact the Event Grid team](mailto:gridpartner@microsoft.com).
-
-The Partner Events feature allow you to publish events to Azure Event Grid for multitenant consumption.
-
-## Onboarding and event publishing overview
-
-### Partner flow
-
-1. Create an Azure tenant if you don't already have one.
-1. Use the Azure CLI to create a new Event Grid `partnerRegistration`. This resource includes information such as display name, description, setup URI, and so on.
-
- ![Create a partner topic](./media/partner-onboarding-how-to/create-partner-registration.png)
-
-1. Create one or more partner namespaces in each region where you want to publish events. The Event Grid service provisions a publishing endpoint (for example, `https://contoso.westus-1.eventgrid.azure.net/api/events`) and access keys.
-
- ![Create a partner namespace](./media/partner-onboarding-how-to/create-partner-namespace.png)
-
-1. Provide a way for customers to register in your system that they want a partner topic.
-1. Contact the Event Grid team to let them know you want your partner topic type to become public.
-
-### Customer flow
-
-1. Your customer visits the Azure portal to note the Azure subscription ID and resource group they want the partner topic created in.
-1. The customer requests a partner topic via your system. In response, you create an event tunnel to your partner namespace.
-1. Event Grid creates a **Pending** partner topic in the customer's Azure subscription and resource group.
-
- ![Create an event channel](./media/partner-onboarding-how-to/create-event-tunnel-partner-topic.png)
-
-1. The customer activates the partner topic via the Azure portal. Events may now flow from your service to the customer's Azure subscription.
-
- ![Activate a partner topic](./media/partner-onboarding-how-to/activate-partner-topic.png)
-
-## Resource model
-The following resource model is for Partner Events.
-
-### Partner registrations
-* Resource: `partnerRegistrations`
-* Used by: Partners
-* Description: Captures the global metadata of the software as a service (SaaS) partner (for example, name, display name, description, setup URI).
-
- Creating or updating a partner registration is a self-serve operation for the partners. This self-serve ability enables partners to build and test the complete end-to-end flow.
-
- Only Microsoft-approved partner registrations are discoverable by customers.
-* Scope: Created in the partner's Azure subscription. Metadata is visible to customers after it's made public.
-
-### Partner namespaces
-* Resource: `partnerNamespaces`
-* Used by: Partners
-* Description: Provides a regional resource for publishing customer events to. Each partner namespace has a publishing endpoint and auth keys. The namespace is also how the partner requests a partner topic for a given customer and lists active customers.
-* Scope: Lives in the partner's subscription.
-
-### Event channel
-* Resource: `partnerNamespaces/eventChannels`
-* Used by: Partners
-* Description: The event channels are a mirror of the customer's partner topic. By creating an event channel and specifying the customer's Azure subscription and resource group in the metadata, you signal to Event Grid to create a partner topic for the customer. Event Grid issues an Azure Resource Manager call to create a corresponding partner topic in the customer's subscription. The partner topic is created in a pending state. There's a one-to-one link between each event channel and partner topic.
-* Scope: Lives in the partner's subscription.
-
-### Partner topics
-* Resource: `partnerTopics`
-* Used by: Customers
-* Description: Partner topics are similar to custom topics and system topics in Event Grid. Each partner topic is associated with a specific source (for example, `Contoso:myaccount`) and a specific partner topic type (for example, Contoso). Customers create event subscriptions on the partner topic to route events to various event handlers.
-
- Customers can't directly create this resource. The only way to create a partner topic is through a partner operation that creates an event channel.
-* Scope: Lives in the customer's subscription.
-
-### Partner topic types
-* Resource: `partnerTopicTypes`
-* Used by: Customers
-* Description: Partner topic types are tenant-wide resource types that enable customers to discover the list of approved partner topic types. The URL looks like https://management.azure.com/providers/Microsoft.EventGrid/partnerTopicTypes)
-* Scope: Global
-
-## Publish events to Event Grid
-When you create a partner namespace in an Azure region, you get a regional endpoint and corresponding auth keys. Publish batches of events to this endpoint for all customer event channels in that namespace. Based on the source field in the event, Azure Event Grid maps each event with the corresponding partner topics.
-
-### Event schema: CloudEvents v1.0
-Publish events to Azure Event Grid by using the CloudEvents 1.0 schema. Event Grid supports both structured mode and batched mode. CloudEvents 1.0 is the only supported event schema for partner namespaces.
-
-### Example flow
-
-1. The publishing service does an HTTP POST to `https://contoso.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01`.
-1. In the request, include a header named `aeg-sas-key` that contains a key for authentication. This key is provisioned during the creation of the partner namespace. For example, a valid header is `aeg-sas-key: VXbGWce53249Mt8wuotr0GPmyJ/nDT4hgdEj9DpBeRr38arnnm5OFg==`.
-1. Set the Content-Type header to "application/cloudevents-batch+json; charset=UTF-8".
-1. Run an HTTP POST query to the publishing URL with a batch of events that correspond to that region. For example:
-
-``` json
-[
-{
- "specversion" : "1.0",
- "type" : "com.contoso.ticketcreated",
- "source" : "com.contoso.account1",
- "subject" : "tickets/123",
- "id" : "A234-1234-1234",
- "time" : "2019-04-05T17:31:00Z",
- "comexampleextension1" : "value",
- "comexampleothervalue" : 5,
- "datacontenttype" : "application/json",
- "data" : {
- object-unique-to-each-publisher
- }
-},
-{
- "specversion" : "1.0",
- "type" : "com.contoso.ticketclosed",
- "source" : "https://contoso.com/account2",
- "subject" : "tickets/456",
- "id" : "A234-1234-1234",
- "time" : "2019-04-05T17:31:00Z",
- "comexampleextension1" : "value",
- "comexampleothervalue" : 5,
- "datacontenttype" : "application/json",
- "data" : {
- object-unique-to-each-publisher
- }
-}
-]
-```
-
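For illustration, the preceding steps can be combined into a single `curl` call. This is a minimal sketch; the endpoint and key are placeholders, and it assumes the batch above is saved as `events.json`:

```bash
# Minimal sketch: publish a CloudEvents batch to a partner namespace endpoint.
# The endpoint URL and key are placeholders; use the values from your namespace.
ENDPOINT="https://contoso.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01"
KEY="<partner-namespace-key>"

curl -X POST "$ENDPOINT" \
  -H "aeg-sas-key: $KEY" \
  -H "Content-Type: application/cloudevents-batch+json; charset=UTF-8" \
  --data @events.json
```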
After posting to the partner namespace endpoint, you receive a response. The response is a standard HTTP status code. Some common responses are:
-
-| Result | Response |
-||--|
-| Success | 200 OK |
-| Event data has incorrect format | 400 Bad Request |
-| Invalid access key | 401 Unauthorized |
-| Incorrect endpoint | 404 Not Found |
-| Array or event exceeds size limits | 413 Payload Too Large |
-
-## References
-
- * [Swagger](https://github.com/ahamad-MS/azure-rest-api-specs/blob/master/specification/eventgrid/resource-manager/Microsoft.EventGrid/preview/2020-04-01-preview/EventGrid.json)
- * [ARM template](/azure/templates/microsoft.eventgrid/allversions)
- * [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/master/schemas/2020-04-01-preview/Microsoft.EventGrid.json)
- * [REST APIs](/azure/templates/microsoft.eventgrid/2020-04-01-preview/partnernamespaces)
- * [CLI extension](/cli/azure/eventgrid)
-
-### SDKs
- * [.NET](https://www.nuget.org/packages/Microsoft.Azure.Management.EventGrid/5.3.1-preview)
- * [Python](https://pypi.org/project/azure-mgmt-eventgrid/3.0.0rc6/)
- * [Java](https://search.maven.org/artifact/com.microsoft.azure.eventgrid.v2020_04_01_preview/azure-mgmt-eventgrid/1.0.0-beta-3/jar)
- * [Ruby](https://rubygems.org/gems/azure_mgmt_event_grid/versions/0.19.0)
- * [JS](https://www.npmjs.com/package/@azure/arm-eventgrid/v/7.0.0)
- * [Go](https://github.com/Azure/azure-sdk-for-go)
--
-## Next steps
-- [Partner topics overview](partner-events-overview.md)
-- [Partner topics onboarding form](https://aka.ms/gridpartnerform)
-- [Auth0 partner topic](auth0-overview.md)
-- [How to use the Auth0 partner topic](auth0-how-to.md)
event-grid Subscribe Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-through-portal.md
To create an Event Grid subscription for any of the supported [event sources](ov
1. Provide additional details about the event subscription, such as the endpoint for handling events and a subscription name. ![Screenshot that shows the "Endpoint Details" and "Event Subscription Details" sections with a subscription name value entered.](./media/subscribe-through-portal/provide-subscription-details.png)-
+
+ > [!NOTE]
+ > For a list of supported event handlers, see [Event handlers](event-handlers.md).
1. To enable dead lettering and customize retry policies, select **Additional Features**. ![Select additional features](./media/subscribe-through-portal/select-additional-features.png)
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
+
+ Title: Azure Event Grid - Subscribe to partner events
+description: This article explains how to subscribe to events from a partner using Azure Event Grid.
+ Last updated : 03/31/2022++
+# Subscribe to events published by a partner with Azure Event Grid
+This article describes steps to subscribe to events that originate in a system owned or managed by a partner (SaaS, ERP, etc.).
+
+> [!IMPORTANT]
+> If you aren't familiar with the **Partner Events** feature, see [Partner Events overview](partner-events-overview.md) to understand the rationale for the steps in this article.
++
+## High-level steps
+
+Here are the steps that a subscriber needs to perform to receive events from a partner.
+
+1. [Register the Event Grid resource provider](#register-the-event-grid-resource-provider) with your Azure subscription (a CLI sketch follows this list).
+2. [Authorize partner](#authorize-partner-to-create-a-partner-topic) to create a partner topic in your resource group.
+3. [Request partner to enable events flow to a partner topic](#request-partner-to-enable-events-flow-to-a-partner-topic).
+4. [Activate partner topic](#activate-a-partner-topic) so that your events start flowing to your partner topic.
+5. [Subscribe to events](#subscribe-to-events).
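+
+For step 1, here's a minimal Azure CLI sketch that registers the resource provider with the current subscription:
+
+```bash
+# Register the Event Grid resource provider, then confirm that the
+# registration state reports "Registered".
+az provider register --namespace Microsoft.EventGrid
+az provider show --namespace Microsoft.EventGrid --query "registrationState"
+```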
++
+## Authorize partner to create a partner topic
+
+You must grant your consent to the partner to create partner topics in a resource group that you designate. This authorization has an expiration time and is effective for the period you specify, between 1 and 365 days.
+
+> [!IMPORTANT]
+> For a stronger security stance, specify the minimum expiration time that gives the partner enough time to configure your events to flow to Event Grid and to provision your partner topic.
+
+> [!NOTE]
+> At the time of the release of this feature on March 31st, 2022, requiring your (subscriber's) authorization for a partner to create resources in your Azure subscription is an **optional** feature. We encourage you to opt in and try the feature in non-production Azure subscriptions before it becomes a mandatory step around June 2022. To opt in, email [askgrid@microsoft.com](mailto:askgrid@microsoft.com) with the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription ID(s) in the email.
+
+The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one of them is required. For your convenience, the following examples use a sample expiration time in UTC format.
+
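+If you prefer scripting over the portal, here's a hedged `az rest` sketch of the underlying management call. The `default` resource name and the body shape are assumptions based on the 2021-10-15-preview REST API ([Partner Configurations](/rest/api/eventgrid/controlplane-version2021-10-15-preview/partner-configurations)); all IDs and dates are placeholders:
+
+```bash
+# Sketch only: create a partner configuration that authorizes one partner.
+# Subscription ID, resource group, registration ID, and dates are placeholders.
+az rest --method put \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/partnerConfigurations/default?api-version=2021-10-15-preview" \
+  --body '{
+    "properties": {
+      "partnerAuthorization": {
+        "defaultMaximumExpirationTimeInDays": 7,
+        "authorizedPartnersList": [
+          {
+            "partnerRegistrationImmutableId": "<partner-registration-id>",
+            "authorizationExpirationTimeInUtc": "2022-06-30T00:00:00Z"
+          }
+        ]
+      }
+    }
+  }'
+```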
+### Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search bar at the top, enter **Partner Configurations**, and select **Event Grid Partner Configurations** under **Services** in the results.
+1. On the **Event Grid Partner Configurations** page, select the **Create Event Grid partner configuration** button on the page, or select **+ Create** on the command bar.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/partner-configurations.png" alt-text="Event Grid Partner Configurations page showing the list of partner configurations and the link to create a partner registration.":::
+1. On the **Create Partner Configuration** page, do the following steps:
+ 1. In the **Project Details** section, select the **Azure subscription** and the **resource group** where you want to allow the partner to create a partner topic or partner destination.
+ 1. In the **Partner Authorizations** section, specify a default expiration time for partner authorizations defined in this configuration.
+ 1. To provide your authorization for a partner to create partner topics or partner destinations in the specified resource group, select **+ Partner Authorization** link.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/partner-authorization-configuration.png" alt-text="Create Partner Configuration page with the Partner Authorization link selected.":::
+
+1. On the **Add partner authorization to create resources** page, you see a list of **verified partners**. A verified partner is a partner whose identity has been validated by Microsoft. You can select a verified partner, and then select the **Add** button at the bottom to give the partner the authorization to add a partner topic or a partner destination in your resource group. This authorization is effective until the expiration time.
+
+   You also have the option to authorize a **non-verified partner**. Unless the partner is an entity that you know well, for example, an organization within your company, we strongly encourage you to work only with verified partners. If the partner isn't yet verified, encourage them to get verified by asking them to contact the Event Grid team at askgrid@microsoft.com.
+
+ 1. To authorize a **verified partner**:
+ 1. Select the partner from the list.
+ 1. Specify **authorization expiration time**.
+    1. Select **Add**.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/add-verified-partner.png" alt-text="Screenshot for granting a verified partner the authorization to create resources in your resource group.":::
+ 1. To authorize a non-verified partner, select **Authorize non-verified partner**, and follow these steps:
+ 1. Enter the **partner registration ID**. You need to ask your partner for this ID.
+ 1. Specify authorization expiration time.
+ 1. Select **Add**.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/add-non-verified-partner.png" alt-text="Screenshot for granting a non-verified partner the authorization to create resources in your resource group.":::
+1. Back on the **Create Partner Configuration** page, verify that the partner is added to the partner authorization list at the bottom.
+1. Select **Review + create** at the bottom of the page.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/create-partner-registration.png" alt-text="Create Partner Configuration page showing the partner authorization you just added.":::
+1. On the **Review** page, review all settings, and then select **Create** to create the partner registration.
++
+## Request partner to enable events flow to a partner topic
+
+Here's the list of partners and a link to submit a request to enable events flow to a partner topic.
+
+- [Auth0](auth0-how-to.md)
++
+## Activate a partner topic
+
+1. In the search bar of the Azure portal, search for and select **Event Grid Partner Topics**.
+1. On the **Event Grid Partner Topics** page, select the partner topic in the list.
+
+ :::image type="content" source="./media/onboard-partner/select-partner-topic.png" lightbox="./media/onboard-partner/select-partner-topic.png" alt-text="Select a partner topic in the Event Grid Partner Topics page.":::
+1. Review the activate message, and select **Activate** on the page or on the command bar to activate the partner topic before the expiration time mentioned on the page.
+
+ :::image type="content" source="./media/onboard-partner/activate-partner-topic-button.png" lightbox="./media/onboard-partner/activate-partner-topic-button.png" alt-text="Image showing the selection of the Activate button on the command bar or on the page.":::
+1. Confirm that the activation status is set to **Activated** and then create event subscriptions for the partner topic by selecting **+ Event Subscription** on the command bar.
+
+ :::image type="content" source="./media/onboard-partner/partner-topic-activation-status.png" lightbox="./media/onboard-partner/partner-topic-activation-status.png" alt-text="Image showing the activation state as **Activated**.":::
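+
+If you want to script this step instead, here's a hedged sketch that calls the partner topic's `activate` action directly through the management REST API; all names and IDs are placeholders:
+
+```bash
+# Sketch only: activate a partner topic through the management REST API.
+az rest --method post \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/partnerTopics/<partner-topic-name>/activate?api-version=2021-10-15-preview"
+```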
+
+## Subscribe to events
+First, create an event handler to process events from the partner. For example, create an event hub, Service Bus queue or topic, or an Azure function.
+
+Then, create an event subscription for the partner topic using the event handler you created.
+
+### Create an event handler
+To test your partner topic, you'll need an event handler. Go to your Azure subscription and spin up a service that's supported as an [event handler](event-handlers.md), such as an [Azure Function](custom-event-to-function.md). For an example, see the [Event Grid Viewer sample](custom-event-quickstart-portal.md#create-a-message-endpoint), which you can use as an event handler via webhooks.
+
+### Subscribe to the partner topic
+Subscribing to the partner topic tells Event Grid where you want your partner events to be delivered.
+
+1. In the Azure portal, type **Event Grid Partner Topics** in the search box, and select **Event Grid Partner Topics**.
+1. On the **Event Grid Partner Topics** page, select the partner topic in the list.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/select-partner-topic.png" lightbox="./media/subscribe-to-partner-events/select-partner-topic.png" alt-text="Image showing the selection of a partner topic.":::
+1. On the **Event Grid Partner Topic** page for the partner topic, select **+ Event Subscription** on the command bar.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/select-add-event-subscription.png" alt-text="Image showing the selection of Add Event Subscription button on the Event Grid Partner Topic page.":::
+1. On the **Create Event Subscription** page, do the following steps:
+ 1. Enter a **name** for the event subscription.
+ 1. For **Filter to Event Types**, select types of events that your subscription will receive.
+    1. For **Endpoint Type**, select an Azure service (Azure Function, Storage Queues, Event Hubs, Service Bus Queue, Service Bus Topic, Hybrid Connections, and so on), Web Hook, or Partner Destination.
+    1. Select the **Select an endpoint** link. In this example, let's use an Azure Event Hubs destination or endpoint.
+ 1. On the **Select Event Hub** page, select configurations for the endpoint, and then select **Confirm Selection**.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/select-endpoint.png" lightbox="./media/subscribe-to-partner-events/select-endpoint.png" alt-text="Image showing the configuration of an endpoint for an event subscription.":::
+ 1. Now on the **Create Event Subscription** page, select **Create**.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/create-event-subscription.png" alt-text="Image showing the Create Event Subscription page with example configurations.":::
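+
+Alternatively, you can create the event subscription from the command line. The following sketch assumes the preview `eventgrid` Azure CLI extension supports partner topic event subscriptions; resource names and IDs are placeholders:
+
+```bash
+# Sketch only: route events from a partner topic to an event hub.
+az eventgrid partner topic event-subscription create \
+  --name my-partner-topic-subscription \
+  --resource-group my-resource-group \
+  --partner-topic-name my-partner-topic \
+  --endpoint-type eventhub \
+  --endpoint "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<event-hub>"
+```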
+
++
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/whats-new.md
Title: What's new? Azure Event Grid description: Learn what is new with Azure Event Grid, such as the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes. Previously updated : 01/13/2022 Last updated : 03/31/2022 # What's new in Azure Event Grid?
Last updated 01/13/2022
Azure Event Grid receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the features that are added or updated in a release.
+## REST API version 2021-10
+This release corresponds to REST API version 2021-10-15-preview, which includes the following features:
+
+- Updates to the Partner Events feature. See the following articles:
+ - [Partner Events overview for customers](partner-events-overview.md)
+ - [Partner Events overview for partners](partner-events-overview-for-partners.md)
+ - [Onboard as an Event Grid partner](onboard-partner.md)
+ - [Subscribe to partner events](subscribe-to-partner-events.md)
+ - [Deliver events to partner destinations](deliver-events-to-partner-destinations.md)
+- New REST API
+ - [Channels](/rest/api/eventgrid/controlplane-version2021-10-15-preview/channels)
+ - [Partner Configurations](/rest/api/eventgrid/controlplane-version2021-10-15-preview/partner-configurations)
+ - [Partner Destinations](/rest/api/eventgrid/controlplane-version2021-10-15-preview/partner-destinations)
+ - [Verified Partners](/rest/api/eventgrid/controlplane-version2021-10-15-preview/verified-partners)
+++
## .NET 6.2.0-preview (REST API version 2021-06)
This release corresponds to REST API version 2021-06-01-preview, which includes the following new features:
event-hubs Event Hubs About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-about.md
Event Hubs contains the following [key components](event-hubs-features.md):
The following figure shows the Event Hubs stream processing architecture:
-![Event Hubs](./media/event-hubs-about/event_hubs_architecture.svg)
+![Event Hubs](./media/event-hubs-about/event_hubs_architecture.png)
## Next steps
hdinsight Hdinsight Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-private-link.md
The use of Private Link to connect to an HDInsight cluster is an optional featur
When `privateLink` is set to *enabled*, internal [standard load balancers](../load-balancer/load-balancer-overview.md) (SLBs) are created, and an Azure Private Link service is provisioned for each SLB. The Private Link service is what allows you to access the HDInsight cluster from private endpoints.
-## Private Link Deployment Steps
+## Private link deployment steps
Successfully creating a Private Link cluster takes many steps, so we've outlined them here. Follow each of the steps below to ensure everything is set up correctly.
-* Step 1: Create prerequisites
-* Step 2: Configure HDInsight subnet
-* Step 3: Deploy NAT gateway OR firewall
-* Step 4: Deploy Private Link cluster
-* Step 5: Create private endpoints
-* Step 6: Configure DNS
-* Step 7: Check cluster connectivity
-* Appendix: Manage private endpoints for Azure HDInsight
+### [Step 1: Create prerequisites](#Createpreqs)
+### [Step 2: Configure HDInsight subnet](#DisableNetworkPolicy)
+### [Step 3: Deploy NAT gateway or firewall](#NATorFirewall)
+### [Step 4: Deploy private link cluster](#deployCluster)
+### [Step 5: Create private endpoints](#PrivateEndpoints)
+### [Step 6: Configure DNS to connect over private endpoints](#ConfigureDNS)
+### [Step 7: Check cluster connectivity](#CheckConnectivity)
+### [Appendix: Manage private endpoints for HDInsight](#ManageEndpoints)
-## <a name="Createpreqs"></a>Step 1: Create Prerequisites
+## <a name="Createpreqs"></a>Step 1: Create prerequisites
To start, deploy the following resources if you haven't created them already. Once this is done, you should have at least one resource group, two virtual networks, and a network security group to attach to the subnet where the HDInsight cluster will be deployed, as shown below.
To start, deploy the following resources if you have not created them already. O
> The network security group (NSG) can simply be deployed; you don't need to modify any NSG rules for cluster deployment.
-## <a name="DisableNetworkPolicy"></a>Step 2: Configure HDInsight Subnet
+## <a name="DisableNetworkPolicy"></a>Step 2: Configure HDInsight subnet
In order to choose a source IP address for your Private Link service, you must explicitly disable the ```privateLinkServiceNetworkPolicies``` setting on the subnet. Follow the instructions here to [disable network policies for Private Link services](../private-link/disable-private-link-service-network-policy.md).
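
For example, a minimal Azure CLI sketch; the resource names are placeholders:

```bash
# Disable private link service network policies on the HDInsight subnet.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name myHDInsightSubnet \
  --disable-private-link-service-network-policies true
```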
-## <a name="NATorFirewall"></a>Step 3: Deploy NAT Gateway *OR* Firewall
+## <a name="NATorFirewall"></a>Step 3: Deploy NAT gateway *or* firewall
Standard load balancers don't automatically provide [public outbound NAT](../load-balancer/load-balancer-outbound-connections.md) as basic load balancers do. Since Private Link clusters use standard load balancers, you must provide your own NAT solution, such as a NAT gateway or a NAT provided by your [firewall](./hdinsight-restrict-outbound-traffic.md), to connect to outbound, public HDInsight dependencies.
-### Deploy a NAT Gateway (Option 1)
+### Deploy a NAT gateway (Option 1)
You can opt to use a NAT gateway if you don't want to configure a firewall or a network virtual appliance (NVA) for NAT. To get started, add a NAT gateway (with a new public IP address in your virtual network) to the configured subnet of your virtual network. This gateway is responsible for translating your private internal IP address to public addresses when traffic needs to go outside your virtual network. For a basic setup to get started:
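
As an illustration, here's a hedged CLI sketch of such a basic setup; all resource names are placeholders:

```bash
# Create a public IP, create a NAT gateway that uses it, and attach the
# gateway to the subnet where the HDInsight cluster will be deployed.
az network public-ip create --resource-group myResourceGroup --name myNatIp --sku Standard
az network nat gateway create --resource-group myResourceGroup --name myNatGateway \
  --public-ip-addresses myNatIp --idle-timeout 10
az network vnet subnet update --resource-group myResourceGroup --vnet-name myVnet \
  --name myHDInsightSubnet --nat-gateway myNatGateway
```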
For a basic setup to get started:
Your HDInsight cluster still needs access to its outbound dependencies. If these outbound dependencies are not allowed, cluster creation might fail. For more information on setting up a firewall, see [Control network traffic in Azure HDInsight](./control-network-traffic.md).
-## <a name="deployCluster"></a>Step 4: Deploy Private Link cluster
+## <a name="deployCluster"></a>Step 4: Deploy private link cluster
At this point all prerequisites should be taken care of and you are ready to deploy the Private Link cluster. The following diagram shows an example of the networking configuration that's required before you create the cluster. In this example, all outbound traffic is forced to Azure Firewall through a user-defined route. The required outbound dependencies should be allowed on the firewall before cluster creation. For Enterprise Security Package clusters, virtual network peering can provide the network connectivity to Azure Active Directory Domain Services.
To create a cluster by using PowerShell, see the [example](/powershell/module/az
To create a cluster by using the Azure CLI, see the [example](/cli/azure/hdinsight#az-hdinsight-create-examples).
-## <a name="PrivateEndpoints"></a>Step 5: Create Private Endpoints
+## <a name="PrivateEndpoints"></a>Step 5: Create private endpoints
Azure automatically creates a Private Link service for the Ambari and SSH load balancers during the Private Link cluster deployment. After the cluster is deployed, you have to create two private endpoints on the client VNET(s), one for Ambari and one for SSH access. Then, link them to the Private Link services that were created as part of the cluster deployment.
-To create the Private Endpoints:
+To create the private endpoints:
1. Open the Azure portal and search for 'Private link'.
2. In the results, select the Private link icon.
3. Select 'Create private endpoint' and use the following configurations to set up the Ambari private endpoint:
To test ssh access: <br>
2. In the terminal window, try connecting to your cluster with SSH: `ssh sshuser@<clustername>.azurehdinsight.net`. (Replace `sshuser` with the SSH user you created for your cluster.)
3. If you're able to connect, the configuration is correct for SSH access.
-## <a name="ManageEndpoints"></a>Manage Private endpoints for Azure HDInsight
+## <a name="ManageEndpoints"></a>Manage private endpoints for HDInsight
You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure HDInsight clusters to allow clients on a virtual network to securely access your cluster over [Private Link](../private-link/private-link-overview.md). Network traffic between the clients on the virtual network and the HDInsight cluster traverses over the Microsoft backbone network, eliminating exposure from the public internet.
hdinsight Hdinsight Restrict Outbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-outbound-traffic.md
Create an application rule collection that allows the cluster to send and receiv
| Rule_2 | * | https:443 | login.windows.net | Allows Windows login activity |
| Rule_3 | * | https:443 | login.microsoftonline.com | Allows Windows login activity |
| Rule_4 | * | https:443 | storage_account_name.blob.core.windows.net | Replace `storage_account_name` with your actual storage account name. Make sure ["secure transfer required"](../storage/common/storage-require-secure-transfer.md) is enabled on the storage account. If you are using Private endpoint to access storage accounts, this step is not needed and storage traffic is not forwarded to the firewall.|
- | Rule_5 | * | https:443 | azure.archive.ubuntu.com | Allows Ubuntu security updates to be installed on the cluster |
+ | Rule_5 | * | http:80 | azure.archive.ubuntu.com | Allows Ubuntu security updates to be installed on the cluster |
:::image type="content" source="./media/hdinsight-restrict-outbound-traffic/hdinsight-restrict-outbound-traffic-add-app-rule-collection-details.png" alt-text="Title: Enter application rule collection details":::
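
A rule like `Rule_5` can also be added from the command line. This is a sketch with placeholder firewall and collection names; it assumes the `azure-firewall` CLI extension:

```bash
# Sketch only: allow outbound HTTP to the Ubuntu update repository.
az network firewall application-rule create \
  --resource-group myResourceGroup \
  --firewall-name myFirewall \
  --collection-name HDInsightAppRules \
  --name Rule_5 \
  --priority 200 \
  --action Allow \
  --source-addresses "*" \
  --protocols Http=80 \
  --target-fqdns azure.archive.ubuntu.com
```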
healthcare-apis Access Healthcare Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/access-healthcare-apis.md
Title: Access Azure Health Data Services description: This article describes the different ways to access Azure Health Data Services in your applications using tools and programming languages. -+ Last updated 03/22/2022-+ # Access Azure Health Data Services
healthcare-apis Autoscale Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/autoscale-azure-api-fhir.md
Title: Autoscale for Azure API for FHIR description: This article describes the autoscale feature for Azure API for FHIR.-+ Last updated 02/15/2022-+ # Autoscale for Azure API for FHIR
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
Title: 'Quickstart: Deploy Azure API for FHIR using an ARM template' description: In this quickstart, learn how to deploy Azure API for Fast Healthcare Interoperability Resources (FHIR®), by using an Azure Resource Manager template (ARM template).-+ -+ Last updated 02/15/2022
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md
Title: Configure Azure role-based access control (Azure RBAC) for Azure API for FHIR description: This article describes how to configure Azure RBAC for the Azure API for FHIR data plane-+ Last updated 02/15/2022-+ # Configure Azure RBAC for FHIR
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in Azure API for FHIR description: This article describes how to configure cross-origin resource sharing in Azure API for FHIR.--++ Last updated 02/15/2022
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
Title: Configure database settings in Azure API for FHIR description: This article describes how to configure Database settings in Azure API for FHIR-+ Last updated 02/15/2022-+ # Configure database settings
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-local-rbac.md
Title: Configure local role-based access control (local RBAC) for Azure API for FHIR description: This article describes how to configure the Azure API for FHIR to use a secondary Azure AD tenant for data plane-+ Last updated 02/15/2022-+ ms.devlang: azurecli
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
Title: Private link for Azure API for FHIR description: This article describes how to set up a private endpoint for Azure API for FHIR services -+ Last updated 02/15/2022-+ # Configure private link
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Title: Disaster recovery for Azure API for FHIR description: In this article, you'll learn how to enable disaster recovery features for Azure API for FHIR.-+ Last updated 02/15/2022-+ # Disaster recovery for Azure API for FHIR
healthcare-apis Fhir Paas Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-cli-quickstart.md
Title: 'Quickstart: Deploy Azure API for FHIR using Azure CLI' description: In this quickstart, you'll learn how to deploy Azure API for FHIR in Azure using the Azure CLI. -+ Last updated 03/21/2022-+
healthcare-apis Fhir Paas Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-portal-quickstart.md
Title: 'Quickstart: Deploy Azure API for FHIR using Azure portal' description: In this quickstart, you'll learn how to deploy Azure API for FHIR and configure settings using the Azure portal. -+ Last updated 03/21/2022-+
healthcare-apis Fhir Paas Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-powershell-quickstart.md
Title: 'Quickstart: Deploy Azure API for FHIR using PowerShell' description: In this quickstart, you'll learn how to deploy Azure API for FHIR using PowerShell. -+ Last updated 02/15/2022-+
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
Title: Find identity object IDs for authentication - Azure API for FHIR description: This article explains how to locate the identity object IDs needed to configure authentication for Azure API for FHIR -+ Last updated 02/15/2022-+ # Find identity object IDs for authentication configuration for Azure API for FHIR
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
Title: Get access token using Azure CLI - Azure API for FHIR description: This article explains how to obtain an access token for Azure API for FHIR using the Azure CLI. -+ Last updated 02/15/2022-+ # Get access token for Azure API for FHIR using Azure CLI
healthcare-apis Move Fhir Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/move-fhir-service.md
Title: Move Azure API for FHIR instance to a different subscription or resource group description: This article describes how to move Azure an API for FHIR instance -+ Last updated 02/15/2022-+ # Move Azure API for FHIR to a different subscription or resource group
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 03/08/2022--++
healthcare-apis Use Custom Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-custom-headers.md
--++ Last updated 02/15/2022
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
--++ Last updated 02/15/2022
healthcare-apis Configure Azure Rbac Using Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac-using-scripts.md
Title: Grant permissions to users and client applications using CLI and REST API - Azure Health Data Services description: This article describes how to grant permissions to users and client applications using CLI and REST API. -+ Last updated 03/21/2022-+ # Configure Azure RBAC role using Azure CLI and REST API
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac.md
Title: Configure Azure RBAC role for FHIR service - Azure Health Data Services description: This article describes how to configure Azure RBAC role for FHIR.-+ Last updated 02/15/2022-+ # Configure Azure RBAC role for Azure Health Data Services
healthcare-apis Deploy Healthcare Apis Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deploy-healthcare-apis-using-bicep.md
Title: How to create Azure Health Data Services, workspaces, FHIR and DICOM service, and MedTech service using Azure Bicep description: This document describes how to deploy Azure Health Data Services using Azure Bicep.-+ Last updated 03/24/2022-+
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
Title: Get started with the DICOM service - Azure Health Data Services description: This document describes how to get started with the DICOM service in Azure Health Data Services.-+ Last updated 03/22/2022-+
healthcare-apis Bulk Importing Fhir Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/bulk-importing-fhir-data.md
Title: Bulk import data into the FHIR service in Azure Health Data Services description: This article describes how to bulk import data to the FHIR service in Azure Health Data Services. -+ Last updated 03/01/2022-+ # Bulk importing data to the FHIR service in Azure Health Data Services
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in FHIR service description: This article describes how to configure cross-origin resource sharing in FHIR service--++ Last updated 03/02/2022
healthcare-apis Fhir Service Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-autoscale.md
Title: Autoscale feature for Azure Health Data Services FHIR service description: This article describes the Autoscale feature for Azure Health Data Services FHIR service.-+ Last updated 03/01/2022-+ # FHIR service autoscale
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
Title: View and enable diagnostic settings in FHIR service - Azure Health Data Services description: This article describes how to enable diagnostic settings in FHIR service and review some sample queries for audit logs. -+ Last updated 03/01/2022-+ # View and enable diagnostic settings in the FHIR service
healthcare-apis Fhir Service Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-resource-manager-template.md
Title: Deploy Azure Health Data Services FHIR service using ARM template description: Learn how to deploy FHIR service by using an Azure Resource Manager template (ARM template)-+ -+ Last updated 03/01/2022
healthcare-apis Get Started With Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/get-started-with-fhir.md
Title: Get started with FHIR service - Azure Health Data Services description: This document describes how to get started with FHIR service in Azure Health Data Services.-+ Last updated 03/22/2022-+
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md
Title: Access the Azure Health Data Services FHIR service using Postman description: This article describes how to access Azure Health Data Services FHIR service with Postman. -+ Last updated 03/01/2022-+ # Access using Postman
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-access-token.md
Title: Get access token using Azure CLI or Azure PowerShell description: This article explains how to obtain an access token for Azure Health Data Services using the Azure CLI or Azure PowerShell. -+ Last updated 03/21/2022-+ ms.devlang: azurecli
healthcare-apis Healthcare Apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-configure-private-link.md
Title: Private Link for Azure Health Data Services description: This article describes how to set up a private endpoint for Azure Health Data Services -+ Last updated 03/14/2022-+ # Configure Private Link for Azure Health Data Services
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
Title: Get started with the MedTech service - Azure Health Data Services description: This document describes how to get started with the MedTech service in Azure Health Data Services.-+ Last updated 03/21/2022-+
healthcare-apis Register Application Cli Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application-cli-rest.md
Title: Register a client application in Azure AD using CLI and REST API - Azure Health Data Services description: This article describes how to register a client application Azure AD using CLI and REST API. -+ Last updated 02/15/2022-+ # Register a client application using CLI and REST API
healthcare-apis Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application.md
Title: Register a client application in Azure Active Directory for the Azure Health Data Services description: How to register a client application in the Azure AD and how to add a secret and API permissions to the Azure Health Data Services -+ Last updated 03/21/2022-+ # Register a client application in Azure Active Directory
iot-central Concepts Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-private-endpoints.md
+
+ Title: Network security using private endpoints in Azure IoT Central | Microsoft Docs
+description: Use private endpoints to limit and secure device connectivity to your IoT Central application.
++ Last updated : 03/10/2022+++++
+# Network security for IoT Central using private endpoints
+
+The standard IoT Central endpoints for device connectivity are accessible using public URLs. Any device with a valid identity can connect to your IoT Central application from any location.
+
+Use private endpoints to limit and secure device connectivity to your IoT Central application and only allow access through your private virtual network.
+
+Private endpoints use private IP addresses from a virtual network address space to connect your devices privately to your IoT Central application. Network traffic between devices on the virtual network and the IoT platform traverses the virtual network and a private link on the [Microsoft backbone network](/azure/networking/microsoft-global-network), eliminating exposure on the public internet.
+
+To learn more about Azure Virtual Networks, see:
+
+- [Azure Virtual Networks](/azure/virtual-network/virtual-networks-overview)
+- [Azure private endpoints](/azure/private-link/private-endpoint-overview)
+- [Azure private links](/azure/private-link/private-link-overview)
+
+Private endpoints in your IoT Central application enable you to:
+
+- Secure your cluster by configuring the firewall to block all device connections on the public endpoint.
+- Increase security for the virtual network by enabling you to block exfiltration of data from the virtual network.
+- Securely connect devices to IoT Central from on-premises networks that connect to the virtual network by using a [VPN gateway](/azure/vpn-gateway/vpn-gateway-about-vpngateways) or [ExpressRoute](/azure/expressroute) private peering.
+
+The use of private endpoints in IoT Central is appropriate for devices connected to an on-premises network. You shouldn't use private endpoints for devices deployed in a wide-area network such as the internet.
+
+## What is a private endpoint?
+
+A private endpoint is a special network interface for an Azure service in your virtual network that's assigned IP addresses from the IP address range of your virtual network. The private endpoint provides secure connectivity between your devices on the virtual network and the IoT platform they connect to. The connection between the private endpoint and the Azure IoT platform uses a secure private link:
++
+Devices connected to the virtual network can seamlessly connect to the cluster over the private endpoint. The authorization mechanisms are the same ones you'd use to connect to the public endpoints. However, you need to update the DPS connection URL because the global provisioning host `global.azure-devices-provisioning.net` URL doesn't resolve when public network access is disabled for your application.
+
+When you create a private endpoint for the cluster in your virtual network, a consent request is sent for approval by the subscription owner. If the user requesting the creation of the private endpoint is also an owner of the subscription, the request is automatically approved. Subscription owners can manage consent requests and private endpoints for the cluster in the Azure portal, under **Private endpoints**.
+
+Each IoT Central application can support multiple private endpoints, each of which can be located in a virtual network in a different region. If you plan to use multiple private endpoints, take extra care to configure your DNS and to plan the size of your virtual network subnets.
+
+## Plan the size of the subnet in your virtual network
+
+The size of the subnet in your virtual network can't be altered once the subnet is created. Therefore, it's important to plan for the size of subnet and allow for future growth.
+
+IoT Central creates multiple customer visible FQDNs as part of a private endpoint deployment. In addition to the FQDN for IoT Central, there are FQDNs for underlying IoT Hub, Event Hubs, and Device Provisioning Service resources.
++
+The IoT Central private endpoint uses multiple IP addresses from your virtual network and subnet. Also, based on the application's load profile, IoT Central [autoscales its underlying IoT hubs](/azure/iot-central/core/concepts-scalability-availability), so the number of IP addresses used by a private endpoint may increase. Plan for this possible increase when you determine the size of the subnet.
+
+Use the following information to help determine the total number of IP addresses required in your subnet:
+
+| Use | Number of IP addresses per private endpoint |
+|--||
+| IoT Central URL | 1 |
+| Underlying IoT hubs | 2-50 |
+| Event Hubs corresponding to IoT hubs | 2-50 |
+| Device Provisioning Service | 1 |
+| Azure reserved addresses | 5 |
+| Total | 11-107 |
+
+To learn more, see the [Azure Virtual Network FAQ](/azure/virtual-network/virtual-networks-faq).
+
+> [!NOTE]
+> The minimum size for the subnet is `/28` (14 usable IP addresses). For use with an IoT Central private endpoint, `/24` is recommended, which helps with extreme workloads.
+
+## Next steps
+
+Now that you've learned about using private endpoints to connect device to your application, here's the suggested next step:
+
+> [!div class="nextstepaction"]
+> [Create a private endpoint for Azure IoT Central application](howto-create-private-endpoint.md).
iot-central Howto Create Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-private-endpoint.md
+
+ Title: Create a private endpoint for IoT Central | Microsoft Docs
+description: Learn how to create and configure a private endpoint for your IoT Central application. A private endpoint lets you securely connect your devices to IoT Central over a private virtual network.
++ Last updated : 03/11/2022++++
+# Administrator
++
+# Create and configure a private endpoint for IoT Central
+
+You can connect your devices to your IoT Central application by using a private endpoint in an Azure Virtual Network.
+
+Private endpoints use private IP addresses from a virtual network address space to connect your devices privately to your IoT Central application. Network traffic between devices on the virtual network and the IoT platform traverses the virtual network and a private link on the [Microsoft backbone network](/azure/networking/microsoft-global-network), eliminating exposure on the public internet. This article shows you how to create a private endpoint for your IoT Central application.
+
+## Prerequisites
+
+- An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.
+
+- An IoT Central application. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md).
+- A virtual network in your Azure subscription. To learn more, see [Create a virtual network](/azure/virtual-network/quick-create-portal).
+
+## Create a private endpoint
+
+There are several ways to create a private endpoint for an IoT Central application:
+
+- [Use the Azure portal to create a private endpoint resource directly](/azure/private-link/create-private-endpoint-portal). Use this option if you don't have access to the IoT Central application that needs the private endpoint. (A CLI sketch of this direct approach appears after the following procedure.)
+- Create a private endpoint on an existing IoT Central application, as described next.
+
+To create a private endpoint on an existing IoT Central application:
+
+1. In the Azure portal, navigate to your application and then select **Networking**.
+
+1. Select the **Private endpoint connections** tab, and then select **+ Private endpoint**.
+
+1. On the **Basics** tab, enter a name and select a region for your private endpoint. Then select **Next: Resource**.
+
+1. The **Resource** tab is auto-populated for you. Select **Next: Virtual Network**.
+
+1. On the **Virtual Network** tab, select the **Virtual network** and **Subnet** where you want to deploy your private endpoint.
+
+1. On the same tab, in the **Private DNS integration** section, select **Yes** for **Integrate with private DNS zone**. The private DNS resolves all the required endpoints to private IP addresses in your virtual network.
+
+    :::image type="content" source="media/howto-create-private-endpoint/private-dns-integration.png" alt-text="Screenshot from Azure portal that shows private D N S integration.":::
+
+ > [!NOTE]
+ > Because of the autoscale capabilities in IoT Central, you should use the **Private DNS integration** option if at all possible. If for some reason you can't use this option, see [Use a custom DNS server](#use-a-custom-dns-server).
+
+1. Select **Next: Tags**.
+
+1. On the **Tags** tab, configure any tags you require, and then select **Next: Review + Create**.
+
+1. Review the configuration details and then select **Create** to create your private endpoint resource.
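+
+If you create the private endpoint resource directly instead (the first option above), here's a hedged CLI sketch. The resource IDs and names are placeholders; the group ID `iotApp` matches the `groupId` value returned by the private link resources API shown later in this article:
+
+```bash
+# Sketch only: create a private endpoint for an IoT Central application.
+az network private-endpoint create \
+  --name my-iotc-private-endpoint \
+  --resource-group my-resource-group \
+  --vnet-name my-vnet \
+  --subnet my-subnet \
+  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.IoTCentral/IoTApps/<app-name>" \
+  --group-id iotApp \
+  --connection-name my-iotc-connection
+```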
++
+### Verify private endpoint creation
+
+When the creation of the private endpoint is complete, you can access it in the Azure portal.
+
+To see all the private endpoints created for your application:
+
+1. In the Azure portal, navigate to your IoT Central application, and then select **Networking**.
+
+2. Select the **Private endpoint connections** tab. The table shows all the private endpoints created for your application.
++
+### Use a custom DNS server
+
+In some situations, you may not be able to integrate with the private DNS zone of the virtual network. For example, you may use your own DNS server or create DNS records using the host files on your virtual machines. This section describes how to retrieve the DNS zone information you need.
+
+1. Install [Chocolatey](https://chocolatey.org/install).
+1. Install ARMClient:
+
+ ```powershell
+ choco install armclient
+ ```
+
+1. Sign in with ARMClient:
+
+ ```powershell
+ armclient login
+ ```
+
+1. Use the following command to get the private DNS zones for your IoT Central application. Replace the placeholders with the details for your IoT Central application:
+
+ ```powershell
+ armclient GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.IoTCentral/IoTApps/<AppName>/privateLinkResources?api-version=2021-11-01-preview
+ ```
+
+1. Check the response. The required DNS zones are in the `requiredZoneNames` array in the response payload:
+
+ ```json
+ {
+ "value": [
+ {
+ "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.IoTCentral/IoTApps/<AppName>/privateLinkResources/iotApp",
+ "name": "ioTApp",
+ "type": "Microsoft.IoTCentral/IoTApps/privateLinkResources",
+ "location": "<the region of your application>",
+ "properties": {
+ "groupId": "iotApp",
+ "requiredMembers":[
+ "<IoTCentral Name>",
+ "<DPS Name>",
+ "<IoTHub1 Name>",
+ "<IoTHub2 Name>",
+ "<EH1 Name>",
+ "<EH2 Name>"],
+ "requiredZoneNames": [
+ "privatelink.azureiotcentral.com",
+ "privatelink.azure-devices.net",
+ "privatelink.servicebus.windows.net",
+ "privatelink.azure-devices-provisioning.net"],
+ "provisioningState": "Succeeded"}
+ }
+ ]
+ }
+ ```
+
+1. In the Azure portal, navigate to your private endpoint, and select **DNS configuration**. On this page, you can find the required information for the IP address mapping to the DNS name.
++
+> [!WARNING]
+> This information lets you populate your custom DNS server with the necessary records. If at all possible, you should integrate with the private DNS Zones of the virtual network and not configure your own custom DNS server. Private endpoints for IoT Central applications differ from other Azure PaaS services. In some situations, such as IoT Central autoscaling, IoT Central scales out the number of IoT Hubs accessible through the private endpoint. If you choose to populate your own custom DNS server, it's your responsibility to update the DNS records whenever IoT Central autoscales, and later remove records when the number of IoT hubs scales in.
+
+## Restrict public access
+
+To restrict public access for your devices to IoT Central, turn off access from public endpoints. After you turn off public access, devices can't connect to IoT Central from public networks and must use a private endpoint:
+
+1. In the Azure portal, navigate to your IoT Central application and then select **Networking**.
+
+1. On the **Public access** tab, select **Disabled** for public network access:
+
+ :::image type="content" source="media/howto-create-private-endpoint/disable-public-network-access.png" alt-text="Screenshot from the Azure portal that shows how to disable public access.":::
+
+1. Optionally, you can define a list of IP addresses/ranges that can connect to the public endpoint of your IoT Central application.
+
+1. Select **Save**.
+
+## Connect to a private endpoint
+
+When you disable public network access for your IoT Central application, your devices can't connect to the Device Provisioning Service (DPS) global endpoint. This is because only your application's direct DPS FQDN resolves to an IP address in your virtual network; the global endpoint is unreachable.
+
+When you configure a private endpoint for your IoT Central application, the IoT Central service endpoint is updated to reflect the direct DPS endpoint.
+
+Update your device code to use the direct DPS endpoint.
++
+## Best practices
+
+- Don't use private link subdomain URLs to connect your devices to IoT Central. Always use the DPS URL shown in your IoT Central application after you create the private endpoint.
+
+- Use Azure-provided private DNS zones for DNS management. Avoid using your own DNS server, because you would need to constantly update your DNS configuration to keep up as IoT Central autoscales its resources.
+
+- If you create multiple private endpoints for the same IoT Central resource, the DNS zone entries for the FQDNs may be overwritten, so you should add them again.
+
+## Limitations
+
+- Currently, private connectivity is only enabled for device connections to the underlying IoT hubs and DPS in the IoT Central application. The IoT Central web UI and APIs continue to work through their public endpoints.
+
+- The private endpoint must be in the same region as your virtual network.
+
+- When you disable public network access:
+
+ - IoT Central simulated devices won't work because they don't have connectivity to your virtual network.
+
+  - The global DPS endpoint (`global.azure-devices-provisioning.net`) isn't accessible. Update your device firmware to connect to the direct DPS instance. You can find the direct DPS URL on the **Device connection groups** page in your IoT Central application.
+
+- You can't rename your IoT Central application after you configure a private endpoint.
+
+- You can't move your private endpoint or the IoT Central application to another resource group or subscription.
+
+- Support is limited to IPv4. IPv6 isn't supported.
+
+## Troubleshooting
+
+If you're having trouble connecting to a private endpoint, use the following troubleshooting guidance:
+
+### Check the connection state
+
+Make sure your private endpoint's connection state is set to approved.
+
+1. In the Azure portal, navigate to your application and then select **Networking**.
+2. Select the **Private endpoint connections** tab. Verify that the connection state is **Approved** for your private endpoint.
+
+### Run checks within the virtual network
+
+Use the following checks to investigate connectivity issues from within the same virtual network. Deploy a virtual machine in the same virtual network where you created the private endpoint, and sign in to it to run the following tests.
+
+To make sure that name resolution is working properly, iterate over all the FQDNs in the private endpoint DNS configuration and run tests using `nslookup`, `Test-NetConnection`, or other similar tools to verify that each DNS name resolves to its corresponding IP address.
+
+In addition, run the following command to verify that the DNS name of each FQDN matches its corresponding IP address.
+
+```bash
+# Replace the placeholder host name with an FQDN from your private endpoint's DNS configuration
+nslookup iotc-….azure-devices.net
+```
+
+The result looks like the following output:
+
+```bash
+#Results in the following output:
+Server:127.0.0.53
+Address:127.0.0.53#53
+
+Non-authoritative answer: xyz.azure-devices.net
+canonical name = xyz.privatelink.azure-devices.net
+Name:xyz.privatelink.azure-devices.net
+Address: 10.1.1.12
+```
+
+If you find an FQDN that doesn't match its corresponding IP address, fix your custom DNS server. If you aren't using a custom DNS server, create a support ticket.
+
+### Check if you have multiple private endpoints
+
+DNS configuration can be overwritten if you create or delete multiple private endpoints for a single IoT Central application:
+
+- In the Azure portal, navigate to the private endpoint resource.
+- In the DNS section, make sure there are entries for all required resources: the IoT Hubs, Event Hubs, DPS, and IoT Central FQDNs.
+- Verify that the IPs (and IPs for other private endpoints using this DNS zone) are reflected in the A record of the DNS.
+- Remove any A records for IPs from older private endpoints that have already been deleted.
+
+### Other troubleshooting tips
+
+If after trying all these checks you're still experiencing an issue, try the [private endpoint troubleshooting guide](/azure/private-link/troubleshoot-private-endpoint-connectivity).
+
+If all the checks are successful and your devices still can't establish a connection to IoT Central, contact the corporate security team responsible for firewalls and networking in general. Potential reasons for failure include:
+
+- Misconfiguration of your Azure virtual network
+- Misconfiguration of a firewall appliance
+- Misconfiguration of user-defined routes in your Azure virtual network
+- A misconfigured proxy between the device and IoT Central resources
+
+## Next steps
+
+Now that you've learned how to create a private endpoint for your application, here's the suggested next step:
+
+> [!div class="nextstepaction"]
+> [Administer your application](howto-administer.md)
iot-central Howto Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-customize-ui.md
Title: Customize the Azure IoT Central UI | Microsoft Docs
-description: How to customize the theme and help links for your Azure IoT central application
+description: How to customize the theme, text, and help links for your Azure IoT Central application
Previously updated : 12/21/2021 Last updated : 04/01/2022
-#Customer intent: As an administrator, I want to customize the themes and help links within Central so that my company's brand is represented within the app.
+#Customer intent: As an administrator, I want to customize the themes, text, and help links within Central so that my company's brand is represented within the app.
# Customize the Azure IoT Central UI
-This article describes how you can customize the UI of your application by applying custom themes and modifying the help links to point to your own custom help resources.
+This article describes how you can customize the UI of your application by applying custom themes, changing the text, and modifying the help links to point to your own custom help resources.
The following screenshot shows a page using the standard theme:
The following screenshot shows a page using a custom screenshot with the customi
## Create theme
-To create a custom theme, navigate to the **Customize your application** page in the **Administration** section:
+To create a custom theme, navigate to the **Appearance** page in the **Customization** section under **Settings**:
![IoT Central themes](./media/howto-customize-ui/themes.png)
If an administrator creates a custom theme, then operators and other users of yo
To provide custom help information to your operators and other users, you can modify the links on the application **Help** menu.
-To modify the help links, navigate to the **Customize help** page in the **Administration** section:
+To modify the help links, navigate to the **Help links** page in the **Customization** section under **Settings**:
![Customize IoT Central help links](./media/howto-customize-ui/help-links.png)
You can also add new entries to the help menu and remove default entries:
> [!NOTE] > You can always revert to the default help links on the **Help links** page.
+## Change application text
+
+To change text labels in the application, navigate to the **Text** page in the **Customization** section under **Settings**.
+
+On this page, you can customize your application's text for all supported languages. For example, you can replace device-related text with any term you prefer by editing and uploading a text customization file. After you upload the file, the application automatically displays the updated text. To make further changes, edit the file and upload it again. You can repeat the process for any language that the IoT Central UI supports.
+
+The following example shows how to change the word `Device` to `Asset` when you view the application in English:
+
+1. Select **Add application text** and then select English in the dropdown.
+1. Download the default text file. The file contains a JSON definition of the text strings you can change.
+1. Open the file in a text editor and edit the strings on the right-hand side, replacing the word `device` with `asset` as shown in the following example:
+
+ ```json
+ {
+ "Device": {
+ "AllEntities": "All assets",
+ "Approve": {
+ "Confirmation": "Are you sure you want to approve this asset?",
+ "Confirmation_plural": "Are you sure you want to approve these assets?"
+ },
+ "Block": {
+ "Confirmation": "Are you sure you want to block this asset?",
+ "Confirmation_plural": "Are you sure you want to block these assets?"
+ },
+ "ConnectionStatus": {
+ "Connected": "Connected",
+ "ConnectedAt": "Connected {{lastReportedTime}}",
+ "Disconnected": "Disconnected",
+ "DisconnectedAt": "Disconnected {{lastReportedTime}}"
+ },
+ "Create": {
+ "Description": "Create a new asset with the given settings",
+ "ID_HelpText": "Enter a unique identifier this asset will use to connect.",
+ "Instructions": "To create a new asset, select an asset template, a name, and a unique ID. <1>Learn more <1></1></1>",
+ "Name_HelpText": "Enter a user friendly name for this asset. If not specified, this will be the same as the asset ID.",
+ "Simulated_Label": "Simulate this asset?",
+ "Simulated_HelpText": "A simulated asset generates telemetry that enables you to test the behavior of your application before you connect a real asset.",
+ "Title": "Create a new asset",
+ "Unassigned_HelpText": "Choosing this will not assign the new asset to any asset template.",
+ "HardwareId_Label": "Hardware type",
+ "HardwareId_HelpText": "Optionally specify the manufacturer of the asset",
+ "MiddlewareId_Label": "Connectivity solution",
+ "MiddlewareId_HelpText": "Optionally choose what type of connectivity solution is installed on the asset"
+ },
+ "Delete": {
+ "Confirmation": "Are you sure you want to delete this asset?",
+ "Confirmation_plural": "Are you sure you want to delete these assets?",
+ "Title": "Delete asset permanently?",
+ "Title_plural": "Delete assets permanently?"
+ },
+ "Entity": "Asset",
+ "Entity_plural": "Assets",
+ "Import": {
+ "Title": "Import assets from a file",
+ "HelpText": "Choose the organization that can access the assets youΓÇÖre importing, and then choose the file youΓÇÖll use to import. <1>Learn more <1></1></1>",
+ "Action": "Import assets with an org assignment from a chosen file.",
+ "Upload_Action": "Upload a .csv file",
+ "Browse_HelpText": "YouΓÇÖll use a CSV file to import assets. Click ΓÇ£Learn moreΓÇ¥ for samples and formatting guidelines."
+ },
+ "JoinToGateway": "Attach to gateway",
+ "List": {
+ "Description": "Grid displaying list of assets",
+ "Empty": {
+ "Text": "Assets will send data to IoT Central for you to monitor, store, and analyze. <1>Learn more <1></1></1>",
+ "Title": "Create an Asset"
+ }
+ },
+ "Migrate": {
+ "Confirmation": "Migrating selected asset to another template. Select migration target.",
+ "Confirmation_plural": "Migrating selected assets to another template. Select migration target."
+ },
+ "Properties": {
+ "Definition": "Asset template",
+ "DefinitionId": "Asset template ID",
+ "Id": "Asset ID",
+ "Name": "Asset name",
+ "Scope": "Organization",
+ "Simulated": "Simulated",
+ "Status": "Asset status"
+ },
+ "Rename": "Rename asset",
+ "Status": {
+ "Blocked": "Blocked",
+ "Provisioned": "Provisioned",
+ "Registered": "Registered",
+ "Unassociated": "Unassociated",
+ "WaitingForApproval": "Waiting for approval"
+ },
+ "SystemAreas": {
+ "Downstreamassets": "Downstream assets",
+ "Module_plural": "Modules",
+ "Properties": "Properties",
+ "RawData": "Raw data"
+ },
+ "TemplateList": {
+ "Empty": "No definitions found.",
+ "FilterInstructions": "Filter templates"
+ },
+ "Unassigned": "Unassigned",
+ "Unblock": {
+ "Confirmation": "Are you sure you want to unblock this asset?",
+ "Confirmation_plural": "Are you sure you want to unblock these assets?"
+ }
+ }
+ }
+ ```
+
+1. Upload your edited customization file and select **Save** to see your new text in the application:
+
+ :::image type="content" source="media/howto-customize-ui/upload-custom-text.png" alt-text="Screenshot showing how to upload custom text file.":::
+
+ The UI now uses the new text values:
+
+ :::image type="content" source="media/howto-customize-ui/updated-ui-text.png" alt-text="Screenshot that shows updated text in the U I.":::
+
+To upload further changes, select the relevant language from the list on the **Text** page in the **Customization** section and reupload the edited customization file.
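+
+If you need to make the same replacement across many strings, you can script the edit instead of changing each value by hand. The following is a minimal sketch that assumes the downloaded file is named `en.json` (a hypothetical name) and that `jq` 1.6 or later is available; it rewrites only the string values, leaving the JSON keys untouched:
+
+```bash
+# Replace "device"/"Device" in string values only - object keys stay unchanged.
+# en.json is a hypothetical name for the downloaded default text file.
+jq 'walk(if type == "string" then gsub("device"; "asset") | gsub("Device"; "Asset") else . end)' \
+  en.json > en-custom.json
+```
+
+Review the output before uploading; some strings may read better with manual adjustments.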
+ ## Next steps Now that you've learned how to customize the UI in your IoT Central application, here are some suggested next steps:
iot-central Howto Manage Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards.md
This feature is available on the KPI, LKV, and property tiles. It lets you adjus
:::image type="content" source="media/howto-manage-dashboards/tile-format.png" alt-text="Screenshot that shows the dialog box for tile formatting.":::
+## Pin analytics to dashboard
+
+To continuously monitor an analytics query, you can pin it to a dashboard. To pin a query:
+
+1. Navigate to **Data explorer** in the left pane and select the query you created.
+1. Select a dashboard from the dropdown menu and select **Pin to dashboard**.
++ ## Next steps Now that you've learned how to create and manage personal dashboards, you can [learn how to manage your application preferences](howto-manage-preferences.md).
iot-dps Iot Dps Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-customer-managed-keys.md
Title: Azure Device Provisioning Service data encryption at rest via customer-managed keys| Microsoft Docs description: Encryption of data at rest with customer-managed keys for Device Provisioning Service - Last updated 02/24/2020 + # Encryption of data at rest with customer-managed keys for Device Provisioning Service
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
In this section, you'll configure sample code to use the [Advanced Message Queui
:::zone pivot="programming-language-csharp"
- ![Device is registered with the IoT hub for CSharp](./media/quick-create-simulated-device-tpm/hub-registration-csharp.png)
+ ![Device is registered with the IoT hub for C#](./media/quick-create-simulated-device-tpm/hub-registration-csharp.png)
::: zone-end
iot-hub Iot Hub Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-customer-managed-keys.md
Title: Encryption of Azure IoT Hub data at rest using customer-managed keys| Microsoft Docs description: Encryption of Azure IoT Hub data at rest using customer-managed keys-+ Last updated 07/07/2021-++ # Encryption of Azure IoT Hub data at rest using customer-managed keys
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
Previously updated : 11/03/2021- Last updated : 03/14/2022 # Azure Machine Learning CLI (v2) release notes [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] In this article, learn about Azure Machine Learning CLI (v2) releases.
In this article, learn about Azure Machine Learning CLI (v2) releases.
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes-v2%22&locale=en-us`
+## 2022-03-14
+
+### Azure Machine Learning CLI (v2) v2.2.0
+
+- `az ml job`
+ - For all job types, flattened the `code` section of the YAML schema. Instead of `code.local_path` to specify the path to the source code directory, it is now just `code`
+  - For all job types, changed the schema for defining data inputs to the job in the job YAML. Instead of specifying the data path using either the `file` or `folder` fields, use the `path` field to specify either a local path, a URI to a cloud path containing the data, or a reference to an existing registered Azure ML data asset via `path: azureml:<data_name>:<data_version>`. Also specify the `type` field to clarify whether the data source is a single file (`uri_file`) or a folder (`uri_folder`). If the `type` field is omitted, it defaults to `type: uri_folder`. For more information, see the section of any of the [job YAML references](reference-yaml-job-command.md) that discusses the schema for specifying input data.
+ - In the [sweep job YAML schema](reference-yaml-job-sweep.md), changed the `sampling_algorithm` field from a string to an object in order to support additional configurations for the random sampling algorithm type
+ - Removed the component job YAML schema. With this release, if you want to run a command job inside a pipeline that uses a component, just specify the component to the `component` field of the command job YAML definition.
+ - For all job types, added support for referencing the latest version of a nested asset in the job YAML configuration. When referencing a registered environment or data asset to use as input in a job, you can alias by latest version rather than having to explicitly specify the version. For example: `environment: azureml:AzureML-Minimal@latest`
+ - For pipeline jobs, introduced the `${{ parent }}` context for binding inputs and outputs between steps in a pipeline. For more information, see [Expression syntax for binding inputs and outputs between steps in a pipeline job](reference-yaml-core-syntax.md#binding-inputs-and-outputs-between-steps-in-a-pipeline-job).
+  - Added support for downloading named outputs of a job via the `--output-name` argument for the `az ml job download` command
+- `az ml data`
+ - Deprecated the `az ml dataset` subgroup, now using `az ml data` instead
+  - There are two types of data that can now be created: from a single file source (`type: uri_file`) or from a folder (`type: uri_folder`). When creating the data asset, you can specify the data source as either a local file or folder, or a URI to a cloud path location. See the [data YAML schema](reference-yaml-data.md) for the full schema
+- `az ml environment`
+ - In the [environment YAML schema](reference-yaml-environment.md), renamed the `build.local_path` field to `build.path`
+ - Removed the `build.context_uri` field, the URI of the uploaded build context location will be accessible via `build.path` when the environment is returned
+- `az ml model`
+ - In the [model YAML schema](reference-yaml-model.md), `model_uri` and `local_path` fields removed and consolidated to one `path` field that can take either a local path or a cloud path URI. `model_format` field renamed to `type`; the default type is `custom_model`, but you can specify one of the other types (`mlflow_model`, `triton_model`) to use the model in no-code deployment scenarios
+ - For `az ml model create`, `--model-uri` and `--local-path` arguments removed and consolidated to one `--path` argument that can take either a local path or a cloud path URI
+ - Added the `az ml model download` command to download a model's artifact files
+- `az ml online-deployment`
+ - In the [online deployment YAML schema](reference-yaml-deployment-managed-online.md), flattened the `code` section of the `code_configuration` field. Instead of `code_configuration.code.local_path` to specify the path to the source code directory containing the scoring files, it is now just `code_configuration.code`
+ - Added an `environment_variables` field to the online deployment YAML schema to support configuring environment variables for an online deployment
+- `az ml batch-deployment`
+ - In the [batch deployment YAML schema](reference-yaml-deployment-batch.md), flattened the `code` section of the `code_configuration` field. Instead of `code_configuration.code.local_path` to specify the path to the source code directory containing the scoring files, it is now just `code_configuration.code`
+- `az ml component`
+ - Flattened the `code` section of the [command component YAML schema](reference-yaml-component-command.md). Instead of `code.local_path` to specify the path to the source code directory, it is now just `code`
+ - Added support for referencing the latest version of a registered environment to use in the component YAML configuration. When referencing a registered environment, you can alias by latest version rather than having to explicitly specify the version. For example: `environment: azureml:AzureML-Minimal@latest`
+ - Renamed the component input and output type value from `path` to `uri_folder` for the `type` field when defining a component input or output
+- Removed the `delete` commands for assets (model, component, data, environment). The existing delete functionality is only a soft delete, so the `delete` commands will be reintroduced in a later release once hard delete is supported
+- Added support for archiving and restoring assets (model, component, data, environment) and jobs, e.g. `az ml model archive` and `az ml model restore`. You can now archive assets and jobs, which will hide the archived entity from list queries (e.g. `az ml model list`).
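+
+As an illustration of several of the changes above (the flattened `code` field, the `path`/`type` input schema, and the `@latest` alias), a minimal command job YAML might look like the following sketch. The file, directory, and asset names are hypothetical:
+
+```bash
+# Hypothetical names throughout - adjust to your workspace assets.
+cat > job.yml <<'EOF'
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+command: python train.py --data ${{inputs.training_data}}
+code: ./src
+inputs:
+  training_data:
+    type: uri_folder                 # the default when type is omitted
+    path: azureml:my-dataset@latest  # latest-version alias for a data asset
+environment: azureml:AzureML-Minimal@latest
+compute: azureml:cpu-cluster
+EOF
+az ml job create --file job.yml
+```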
+ ## 2021-10-04 ### Azure Machine Learning CLI (v2) v2.0.2
__RSS feed__: Get notified when this page is updated by copying and pasting the
- Added new `model_format` property to Model for no-code deployment scenarios - `az ml dataset` - Renamed `az ml data` subgroup to `az ml dataset`
- - Updated [dataset YAML schema](reference-yaml-dataset.md)
+ - Updated dataset YAML schema
- `az ml component` - Added the `az ml component` commands for managing Azure ML components - Added support for command components ([command component YAML schema](reference-yaml-component-command.md)) - `az ml online-endpoint` - `az ml endpoint` subgroup split into two separate groups: `az ml online-endpoint` and `az ml batch-endpoint`
- - Updated [online endpoint YAML schema](reference-yaml-endpoint-managed-online.md)
+ - Updated [online endpoint YAML schema](reference-yaml-endpoint-online.md)
- Added support for local endpoints for dev/test scenarios - Added interactive VSCode debugging support for local endpoints (added the `--vscode-debug` flag to `az ml batch-endpoint create/update`) - `az ml online-deployment` - `az ml deployment` subgroup split into two separate groups: `az ml online-deployment` and `az ml batch-deployment`
- - Updated [managed online deployment YAML schema](reference-yaml-endpoint-managed-online.md)
+ - Updated [managed online deployment YAML schema](reference-yaml-deployment-managed-online.md)
- Added autoscaling support via integration with Azure Monitor Autoscale - Added support for updating multiple online deployment properties in the same update operation - Added support for performing concurrent operations on deployments under the same endpoint
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
-+ Previously updated : 12/22/2021 Last updated : 03/31/2022 #Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it. # What are Azure Machine Learning endpoints (preview)? [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] Use Azure Machine Learning endpoints (preview) to streamline model deployments for both real-time and batch inference deployments. Endpoints provide a unified interface to invoke and manage model deployments across compute types.
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## April 04, 2022
+New Image for [Data Science VM – Ubuntu 18.04](https://azuremarketplace.microsoft.com/en-US/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview)
+
+Version: 22.04.01
+
+Main changes:
+
+- Updated R environment - added the following libraries: Cluster, Devtools, Factoextra, Glue, Here, Ottr, Paletteer, Patchwork, Plotly, Rmd2jupyter, Scales, Statip, Summarytools, Tidyverse, Tidymodels, and Testthat
+- Further `Log4j` vulnerability mitigation - although `Log4j` isn't used, we removed the old version 1.0 JARs and moved all `log4j` JARs to version 2.0.
+- Azure CLI to version 2.33.1
+- Fixed a JupyterHub access issue when using a public IP address
+- Redesign of Conda environments - we're continuing to align and refine the Conda environments, so we created:
+  - `azureml_py38`: environment based on Python 3.8 with the [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) preinstalled, which also contains the [AutoML](/azure/machine-learning/concept-automated-ml) environment
+  - `azureml_py38_PT_TF`: an extension of the `azureml_py38` environment, preinstalled with the latest TensorFlow and PyTorch
+ - `py38_default`: default system environment based on Python 3.8
+ - We have removed `azureml_py36_tensorflow`, `azureml_py36_pytorch`, `py38_tensorflow` and `py38_pytorch` environments.
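+
+A quick way to check the redesigned environments after you sign in to the VM is to list them and activate one. A minimal sketch, assuming the image described above:
+
+```bash
+# List the Conda environments shipped with the image, then try the AzureML one.
+conda env list
+conda activate azureml_py38
+python -c "import azureml.core; print(azureml.core.VERSION)"
+```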
+
+ ## March 18, 2022 [Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
--++ Last updated 05/08/2020
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
-+ Previously updated : 01/11/2022 Last updated : 03/31/2022
# Access Azure resources from an online endpoint (preview) with a managed identity [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] Learn how to access Azure resources from your scoring script with an online endpoint and either a system-assigned managed identity or a user-assigned managed identity.
This guide assumes you don't have a managed identity, a storage account or an on
## Define configuration YAML file for deployment
-To deploy an online endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see [online endpoint YAML reference](reference-yaml-endpoint-managed-online.md) document.
+To deploy an online endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see [online endpoint YAML reference](reference-yaml-endpoint-online.md) document.
The YAML files in the following examples are used to create online endpoints.
The following YAML example is located at `endpoints/online/managed/managed-ident
* Defines the name by which you want to refer to the endpoint, `my-sai-endpoint`. * Specifies the type of authorization to use to access the endpoint, `auth-mode: key`. This YAML example, `2-sai-deployment.yml`,
This YAML example, `2-sai-deployment.yml`,
* Indicates that the endpoint has an associated deployment called `blue`. * Configures the details of the deployment such as, which model to deploy and which environment and scoring script to use. # [User-assigned managed identity](#tab/user-identity)
The following YAML example is located at `endpoints/online/managed/managed-ident
* Specifies the type of authorization to use to access the endpoint, `auth-mode: key`. * Indicates the identity type to use, `type: user_assigned` This YAML example, `2-sai-deployment.yml`,
This YAML example, `2-sai-deployment.yml`,
* Indicates that the endpoint has an associated deployment called `blue`. * Configures the details of the deployment such as, which model to deploy and which environment and scoring script to use.
Configure the variable names for the workspace, workspace location, and the endp
The following code exports these values as environment variables in your endpoint: Next, specify what you want to name your blob storage account, blob container, and file. These variable names are defined here, and are referred to in `az storage account create` and `az storage container create` commands in the next section. The following code exports those values as environment variables: After these variables are exported, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the system-assigned managed identity that's generated upon endpoint creation.
After these variables are exported, create a text file locally. When the endpoin
Decide on the name of your endpoint, workspace, workspace location and export that value as an environment variable: Next, specify what you want to name your blob storage account, blob container, and file. These variable names are defined here, and are referred to in `az storage account create` and `az storage container create` commands in the next section. After these variables are exported, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the user-assigned managed identity used in the endpoint. Decide on the name of your user identity name, and export that value as an environment variable:
When you [create an online endpoint](#create-an-online-endpoint), a system-assig
To create a user-assigned managed identity, use the following:
This is the storage account and blob container that you'll give the online endpo
First, create a storage account. Next, create the blob container in the storage account. Then, upload your text file to the blob container. # [User-assigned managed identity](#tab/user-identity) First, create a storage account. You can also retrieve an existing storage account ID with the following. Next, create the blob container in the storage account. Then, upload file in container.
When you create an online endpoint, a system-assigned managed identity is create
>[!IMPORTANT] > System assigned managed identities are immutable and can't be changed once created. Check the status of the endpoint with the following. If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md). # [User-assigned managed identity](#tab/user-identity) Check the status of the endpoint with the following. If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md).
You can allow the online endpoint permission to access your storage via its syst
Retrieve the system-assigned managed identity that was created for your endpoint. From here, you can give the system-assigned managed identity permission to access your storage. # [User-assigned managed identity](#tab/user-identity) Retrieve user-assigned managed identity client ID. Retrieve the user-assigned managed identity ID. Get the container registry associated with workspace. Retrieve the default storage of the workspace. Give permission of storage account to the user-assigned managed identity. Give permission of container registry to user assigned managed identity. Give permission of default workspace storage to user-assigned managed identity.
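The permission grants described here map to Azure role assignments. The following is a minimal sketch of the system-assigned case; the variable names are hypothetical, `$storage_id` is assumed to hold the storage account resource ID, and the `--query` path is an assumption you should verify against the `show` output:

```bash
# Hypothetical variables: $ENDPOINT_NAME and $storage_id must be set beforehand.
system_identity=$(az ml online-endpoint show --name $ENDPOINT_NAME \
  --query identity.principal_id --output tsv)   # verify the query path in your CLI version
az role assignment create \
  --assignee-object-id $system_identity \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope $storage_id
```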
Give permission of default workspace storage to user-assigned managed identity.
Refer to the following script to understand how to use your identity token to access Azure resources, in this scenario, the storage account created in previous sections. ## Create a deployment with your configuration
Create a deployment that's associated with the online endpoint. [Learn more abou
# [System-assigned managed identity](#tab/system-identity) >[!NOTE] > The value of the `--name` argument may override the `name` key inside the YAML file. Check the status of the deployment. To refine the above query to only return specific data, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
To refine the above query to only return specific data, see [Query Azure CLI com
To check the init method output, see the deployment log with the following code. # [User-assigned managed identity](#tab/user-identity) >[!Note] > The value of the `--name` argument may override the `name` key inside the YAML file. Once the command executes, you can check the status of the deployment. To refine the above query to only return specific data, see [Query Azure CLI command output](/cli/azure/query-azure-cli). > [!NOTE] > The init method in the scoring script reads the file from your storage account using the system assigned managed identity token. To check the init method output, see the deployment log with the following code.
When your deployment completes, the model, the environment, and the endpoint ar
Once your online endpoint is deployed, confirm its operation. Details of inferencing vary from model to model. For this guide, the JSON query parameters look like: To call your endpoint, run: # [System-assigned managed identity](#tab/system-identity) # [User-assigned managed identity](#tab/user-identity)
If you don't plan to continue using the deployed online endpoint and storage, de
# [System-assigned managed identity](#tab/system-identity) # [User-assigned managed identity](#tab/user-identity)
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
- Previously updated : 03/29/2022-- Last updated : 03/31/2022++ # Install and set up the CLI (v2) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] The `ml` extension (preview) to the [Azure CLI](/cli/azure/) is the enhanced interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle.
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
Previously updated : 01/07/2022 Last updated : 03/31/2022 ms.devlang: azurecli, cliv2
ms.devlang: azurecli, cliv2
# Create and run machine learning pipelines using components with the Azure Machine Learning CLI (Preview) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure CLI and Components (for more, see [What is an Azure Machine Learning component?](concept-component.md)). You can [create pipelines without using components](how-to-train-cli.md#build-a-training-pipeline), but components offer the greatest amount of flexibility and reuse. AzureML Pipelines may be defined in YAML and run from the CLI, authored in Python, or composed in AzureML Studio Designer with a drag-and-drop UI. This document focuses on the CLI.
You should receive a JSON dictionary with information about the pipeline job, in
Open `ComponentA.yaml` to see how the first component is defined: In the current preview, only components of type `command` are supported. The `name` is the unique identifier and used in Studio to describe the component, and `display_name` is used for a display-friendly name. The `version` key-value pair allows you to evolve your pipeline components while maintaining reproducibility with older versions.
-All files in the `code.local_path` value will be uploaded to Azure for processing.
+All files in the `./componentA_src` directory will be uploaded to Azure for processing.
The `environment` section allows you to specify the software environment in which the component runs. In this case, the component uses a base Docker image, as specified in `environment.image`. For more, see [Create & use software environments in Azure Machine Learning](how-to-use-environments.md).
For more information on components and their specification, see [What is an Azur
In the example directory, the `pipeline.yaml` file looks like the following code: If you open the job's URL in Studio (the value of `services.Studio.endpoint` from the `job create` command when creating a job or `job show` after the job has been created), you'll see a graph representation of your pipeline:
You define input data directories for your pipeline in the pipeline YAML file us
:::image type="content" source="media/how-to-create-component-pipelines-cli/inputs-and-outputs.png" alt-text="Image showing how the inputs and outputs paths map to the jobs inputs and outputs paths" lightbox="media/how-to-create-component-pipelines-cli/inputs-and-outputs.png":::
-1. The `inputs.pipeline_sample_input_data` path (line 6) creates a key identifier and uploads the input data from the `local_path` directory (line 8). This identifier `${{inputs.pipeline_sample_input_data}}` is then used as the value of the `jobs.componentA_job.inputs.componentA_input` key (line 19). In other words, the pipeline's `pipeline_sample_input_data` input is passed to the `componentA_input` input of Component A.
-1. The `jobs.componentA_job.outputs.componentA_output` path (line 21) is used with the identifier `${{jobs.componentA_job.outputs.componentA_output}}` as the value for the next step's `jobs.componentB_job.inputs.componentB_input` key (line 27).
-1. As with Component A, the output of Component B (line 29) is used as the input to Component C (line 35).
-1. The pipeline's `outputs.final_pipeline_output` key (line 11) is the source of the identifier used as the value for the `jobs.componentC_job.outputs.componentC_output` key (line 37). In other words, Component C's output is the pipeline's final output.
+1. The `parent.inputs.pipeline_sample_input_data` path (line 7) creates a key identifier and uploads the input data from the `path` directory (line 9). This identifier `${{parent.inputs.pipeline_sample_input_data}}` is then used as the value of the `parent.jobs.componentA_job.inputs.componentA_input` key (line 20). In other words, the pipeline's `pipeline_sample_input_data` input is passed to the `componentA_input` input of Component A.
+1. The `parent.jobs.componentA_job.outputs.componentA_output` path (line 22) is used with the identifier `${{parent.jobs.componentA_job.outputs.componentA_output}}` as the value for the next step's `parent.jobs.componentB_job.inputs.componentB_input` key (line 28).
+1. As with Component A, the output of Component B (line 30) is used as the input to Component C (line 36).
+1. The pipeline's `parent.outputs.final_pipeline_output` key (line 12) is the source of the identifier used as the value for the `parent.jobs.componentC_job.outputs.componentC_output` key (line 38). In other words, Component C's output is the pipeline's final output.
Studio's visualization of this pipeline looks like this: :::image type="content" source="media/how-to-create-component-pipelines-cli/pipeline-graph-dependencies.png" alt-text="Screenshot showing Studio's graph view of a pipeline with data dependencies" lightbox="media/how-to-create-component-pipelines-cli/pipeline-graph-dependencies.png":::
-You can see that `inputs.pipeline_sample_input_data` is represented as a `Dataset`. The keys of the `jobs.<COMPONENT_NAME>.inputs` and `outputs` paths are shown as data flows between the pipeline components.
+You can see that `parent.inputs.pipeline_sample_input_data` is represented as a `Dataset`. The keys of the `jobs.<COMPONENT_NAME>.inputs` and `outputs` paths are shown as data flows between the pipeline components.
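+
+Condensed to two steps, the binding pattern looks like the following sketch. The component files, paths, and compute name are hypothetical:
+
+```bash
+# A two-step illustration of the ${{parent}} bindings - names are hypothetical.
+cat > pipeline-sketch.yml <<'EOF'
+$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
+type: pipeline
+compute: azureml:cpu-cluster
+inputs:
+  pipeline_sample_input_data:
+    type: uri_folder
+    path: ./data
+outputs:
+  final_pipeline_output:
+jobs:
+  componentA_job:
+    type: command
+    component: ./componentA.yml
+    inputs:
+      componentA_input: ${{parent.inputs.pipeline_sample_input_data}}
+    outputs:
+      componentA_output:
+  componentB_job:
+    type: command
+    component: ./componentB.yml
+    inputs:
+      componentB_input: ${{parent.jobs.componentA_job.outputs.componentA_output}}
+    outputs:
+      componentB_output: ${{parent.outputs.final_pipeline_output}}
+EOF
+```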
You can run this example by switching to the `3b_pipeline_with_data` subdirectory of the samples repository and running:
One of the common scenarios for machine learning pipelines has three major phase
Each of these phases may have multiple components. For instance, the data preparation step may have separate steps for loading and transforming the training data. The examples repository contains an end-to-end example pipeline in the `cli/jobs/pipelines-with-components/nyc_taxi_data_regression` directory.
-The `job.yml` begins with the mandatory `type: pipeline` key-value pair. Then, it defines inputs and outputs as follows:
+The `pipeline.yml` begins with the mandatory `type: pipeline` key-value pair. Then, it defines inputs and outputs as follows:
As described previously, these entries specify the input data to the pipeline, in this case the dataset in `./data`, and the intermediate and final outputs of the pipeline, which are stored in separate paths. The names within these input and output entries become values in the `inputs` and `outputs` entries of the individual jobs:
-Notice how `jobs.train_job.outputs.model_output` is used as an input to both the prediction job and the scoring job, as shown in the following diagram:
+Notice how `parent.jobs.train-job.outputs.model_output` is used as an input to both the prediction job and the scoring job, as shown in the following diagram:
:::image type="content" source="media/how-to-create-component-pipelines-cli/regression-graph.png" alt-text="pipeline graph of the NYC taxi-fare prediction task" lightbox="media/how-to-create-component-pipelines-cli/regression-graph.png":::
Click on a component. You'll see some basic information about the component, suc
### Use registered components in a job specification file
-In the `1b_e2e_registered_components` directory, open the `pipeline.yml` file. The keys and values in the `inputs` and `outputs` dictionaries are similar to those already discussed. The only significant difference is the value of the `component` values in the `jobs.<JOB_NAME>.component` entries. The `component` value is of the form `azureml:<JOB_NAME>:<COMPONENT_VERSION>`. The `train-job` definition, for instance, specifies that version 31 of the registered component `Train` should be used:
+In the `1b_e2e_registered_components` directory, open the `pipeline.yml` file. The keys and values in the `inputs` and `outputs` dictionaries are similar to those already discussed. The only significant difference is the value of the `component` field in the `jobs.<JOB_NAME>.component` entries. The `component` value is of the form `azureml:<COMPONENT_NAME>:<COMPONENT_VERSION>`. The `train-job` definition, for instance, specifies that the latest version of the registered component `Train` should be used:
## Caching & reuse
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
-+ Previously updated : 12/22/2021 Last updated : 03/31/2022 ms.devlang: azurecli
ms.devlang: azurecli
# How to deploy an AutoML model to an online endpoint (preview) + In this article, you'll learn how to deploy an AutoML-trained machine learning model to an online endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more, see [What is automated machine learning (AutoML)?](concept-automated-ml.md). In this article you'll know how to deploy AutoML trained machine learning model to online endpoints using:
To deploy using these files, you can use either the studio or the Azure CLI.
# [Studio](#tab/Studio)
-1. Go to the Models page in Azure machine learning studio
+1. Go to the Models page in Azure Machine Learning studio
-1. Click on + Register Model option
+1. Select the **+ Register Model** option
1. Register the model you downloaded from Automated ML run
To deploy using these files, you can use either the studio or the Azure CLI.
To create a deployment from the CLI, you'll need the Azure CLI with the ML v2 extension. Run the following command to confirm that you've both: If you receive an error message or you don't see `Extensions: ml` in the response, follow the steps at [Install and set up the CLI (v2)](how-to-configure-cli.md).
-Login:
+Sign in:
If you've access to multiple Azure subscriptions, you can set your active subscription: Set the default resource group and workspace to where you wish to create the deployment: ## Put the scoring file in its own directory
To create an online endpoint from the command line, you'll need to create an *en
__automl_endpoint.yml__ __automl_deployment.yml__ You'll need to modify this file to use the files you downloaded from the AutoML Models page.
You'll need to modify this file to use the files you downloaded from the AutoML
| Path | Change to | | | |
- | `model:local_path` | The path to the `model.pkl` file you downloaded. |
- | `code_configuration:code:local_path` | The directory in which you placed the scoring file. |
+ | `model:path` | The path to the `model.pkl` file you downloaded. |
+ | `code_configuration:code:path` | The directory in which you placed the scoring file. |
| `code_configuration:scoring_script` | The name of the Python scoring file (`scoring_file_<VERSION>.py`). | | `environment:conda_file` | A file URL for the downloaded conda environment file (`conda_env_<VERSION>.yml`). | > [!NOTE]
- > For a full description of the YAML, see [Managed online endpoints (preview) YAML reference](reference-yaml-endpoint-managed-online.md).
+ > For a full description of the YAML, see [Online endpoint (preview) YAML reference](reference-yaml-endpoint-online.md).
1. From the command line, run:
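
    Assuming the endpoint and deployment YAML files described earlier are named `automl_endpoint.yml` and `automl_deployment.yml`, the commands resemble this sketch:

    ```bash
    # A sketch - file names follow the YAML files shown earlier.
    az ml online-endpoint create -f automl_endpoint.yml
    az ml online-deployment create -f automl_deployment.yml --all-traffic
    ```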
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
Previously updated : 10/21/2021- Last updated : 03/31/2022+
Learn how to use the Azure Machine Learning REST API to deploy models for batch scoring (preview). [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] The REST API uses standard HTTP verbs to create, retrieve, update, and delete resources. The REST API works with any language or tool that can make HTTP requests. REST's straightforward structure makes it a good choice in scripting environments and for MLOps automation.
In this article, you learn how to use the new REST APIs to:
> [!NOTE] > Batch endpoint names need to be unique at the Azure region level. For example, there can be only one batch endpoint with the name mybatchendpoint in westus2. ## Azure Machine Learning batch endpoints
In the following REST API calls, we use `SUBSCRIPTION_ID`, `RESOURCE_GROUP`, `LO
Administrative REST requests a [service principal authentication token](how-to-manage-rest.md#retrieve-a-service-principal-authentication-token). Replace `TOKEN` with your own value. You can retrieve this token with the following command: The service provider uses the `api-version` argument to ensure compatibility. The `api-version` argument varies from service to service. Set the API version as a variable to accommodate future versions: ### Create compute Batch scoring runs only on cloud computing resources, not locally. The cloud computing resource is a reusable virtual computer cluster where you can run batch scoring workflows. Create a compute cluster: > [!TIP] > If you want to use an existing compute instead, you must specify the full Azure Resource Manager ID when [creating the batch deployment](#create-batch-deployment). The full ID uses the format `/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/computes/<your-compute-name>`.
To register the model and code, first they need to be uploaded to a storage acco
You can use the tool [jq](https://stedolan.github.io/jq/) to parse the JSON result and get the required values. You can also use the Azure portal to find the same information: ### Upload & register code Now that you have the datastore, you can upload the scoring script. Use the Azure Storage CLI to upload a blob into your default container: > [!TIP] > You can also use other methods to upload, such as the Azure portal or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). Once you upload your code, you can specify your code with a PUT request: ### Upload and register model Similar to the code, Upload the model files: Now, register the model: ### Create environment The deployment needs to run in an environment that has the required dependencies. Create the environment with a PUT request. Use a docker image from Microsoft Container Registry. You can configure the docker image with `image` and add conda dependencies with `condaFile`. Run the following code to read the `condaFile` defined in json. The source file is at `/cli/endpoints/batch/mnist/environment/conda.json` in the example repository: Now, run the following snippet to create an environment: ## Deploy with batch endpoints
Next, create the batch endpoint, a deployment, and set the default deployment.
Create the batch endpoint: ### Create batch deployment Create a batch deployment under the endpoint: ### Set the default batch deployment under the endpoint There's only one default batch deployment under one endpoint, which will be used when invoke to run batch scoring job. ## Run batch scoring
Invoking a batch endpoint triggers a batch scoring job. A job `id` is returned i
Get the scoring uri and access token to invoke the batch endpoint. First get the scoring uri: Get the batch endpoint access token: Now, invoke the batch endpoint to start a batch scoring job. The following example scores data publicly available in the cloud: If your data is stored in an Azure Machine Learning registered datastore, you can invoke the batch endpoint with a dataset. The following code creates a new dataset: Next, reference the dataset when invoking the batch endpoint: In the previous code snippet, a custom output location is provided by using `datastoreId`, `path`, and `outputFileName`. These settings allow you to configure where to store the batch scoring results.
In the previous code snippet, a custom output location is provided by using `dat
For this example, the output is stored in the default blob storage for the workspace. The folder name is the same as the endpoint name, and the file name is randomly generated by the following code: ### Check the batch scoring job
Batch scoring jobs usually take some time to process the entire set of inputs. M
> [!TIP] > The example invokes the default deployment of the batch endpoint. To invoke a non-default deployment, use the `azureml-model-deployment` HTTP header and set the value to the deployment name. For example, using a parameter of `--header "azureml-model-deployment: $DEPLOYMENT_NAME"` with curl. ### Check batch scoring results
For information on checking the results, see [Check batch scoring results](how-t
If you aren't going to use the batch endpoint, you should delete it with the following command (it deletes the batch endpoint and all the underlying deployments): ## Next steps
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Previously updated : 12/22/2021 Last updated : 03/31/2022 ms.devlang: azurecli
ms.devlang: azurecli
# Deploy a TensorFlow model served with TF Serving using a custom container in an online endpoint (preview) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] Learn how to deploy a custom container as an online endpoint in Azure Machine Learning.
cd azureml-examples/cli
Define environment variables: ## Download a TensorFlow model Download and unzip a model that divides an input by two and adds 2 to the result: ## Run a TF Serving image locally to test that it works Use docker to run your image locally for testing: ### Check that you can send liveness and scoring requests to the image First, check that the container is "alive," meaning that the process inside the container is still running. You should get a 200 (OK) response. Then, check that you can get predictions about unlabeled data: ### Stop the image Now that you've tested locally, stop the image: ## Create a YAML file for your endpoint and deployment
You can configure your cloud deployment using YAML. Take a look at the sample YA
__tfserving-endpoint.yml__ __tfserving-deployment.yml__ There are a few important concepts to notice in this YAML:
and `tfserving-deployment.yml` contains:
model: name: tfserving-mounted version: 1
- local_path: ./half_plus_two
+ path: ./half_plus_two
``` then your model will be located under `/var/azureml-app/azureml-models/tfserving-deployment/1` in your deployment:
endpoint_name: tfserving-endpoint
model: name: tfserving-mounted version: 1
- local_path: ./half_plus_two
+ path: ./half_plus_two
model_mount_path: /var/tfserving-model-mount ..... ```
az ml online-deployment create --name tfserving-deployment -f endpoints/online/c
Once your deployment completes, see if you can make a scoring request to the deployed endpoint. ### Delete endpoint and model
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
-+ Previously updated : 12/22/2021 Last updated : 03/31/2022
# Deploy and score a machine learning model by using an online endpoint (preview) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] Learn how to use an online endpoint (preview) to deploy your model, so you don't have to create and manage the underlying infrastructure. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.
To set your endpoint name, choose one of the following commands, depending on yo
For Unix, run this command: > [!NOTE] > Endpoint names must be unique within an Azure region. For example, in the Azure `westus2` region, there can be only one endpoint with the name `my-endpoint`.
For Unix, run this command:
The following snippet shows the *endpoints/online/managed/sample/endpoint.yml* file: > [!NOTE]
-> For a full description of the YAML, see [Managed online endpoints (preview) YAML reference](reference-yaml-endpoint-managed-online.md).
+> For a full description of the YAML, see [Online endpoint (preview) YAML reference](reference-yaml-endpoint-online.md).
-The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the YAML example in [Prepare your system](#prepare-your-system) or the [online endpoint YAML reference](reference-yaml-endpoint-managed-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).
+The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the YAML example in [Prepare your system](#prepare-your-system) or the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).
| Key | Description | | | |
The example contains all the files needed to deploy a model on an online endpoin
The following snippet shows the *endpoints/online/managed/sample/blue-deployment.yml* file, with all the required inputs: The table describes the attributes of a `deployment`: | Key | Description | | | | | `name` | The name of the deployment. |
-| `model` | In this example, we specify the model properties inline: `local_path`. Model files are automatically uploaded and registered with an autogenerated name. For related best practices, see the tip in the next section. |
-| `code_configuration.code.local_path` | The directory that contains all the Python source code for scoring the model. You can use nested directories and packages. |
-| `code_configuration.scoring_script` | The Python file that's in the `code_configuration.code.local_path` scoring directory. This Python code must have an `init()` function and a `run()` function. The function `init()` will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. |
+| `model` | In this example, we specify the model properties inline: `path`. Model files are automatically uploaded and registered with an autogenerated name. For related best practices, see the tip in the next section. |
+| `code_configuration.code.path` | The directory that contains all the Python source code for scoring the model. You can use nested directories and packages. |
+| `code_configuration.scoring_script` | The Python file that's in the `code_configuration.code.path` scoring directory. This Python code must have an `init()` function and a `run()` function. The function `init()` will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. |
| `environment` | Contains the details of the environment to host the model and code. In this example, we have inline definitions that include the `path`. We'll use `environment.docker.image` for the image. The `conda_file` dependencies will be installed on top of the image. For more information, see the tip in the next section. | | `instance_type` | The VM SKU that will host your deployment instances. For more information, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). | | `instance_count` | The number of instances in the deployment. Base the value on the workload you expect. For high availability, we recommend that you set `instance_count` to at least `3`. |
-For more information about the YAML schema, see the [online endpoint YAML reference](reference-yaml-endpoint-managed-online.md).
+For more information about the YAML schema, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md).
> [!NOTE] > To use Kubernetes instead of managed endpoints as a compute target:
For more information about the YAML schema, see the [online endpoint YAML refere
### Register your model and environment separately
-In this example, we specify the `local_path` (where to upload files from) inline. The CLI automatically uploads the files and registers the model and environment. As a best practice for production, you should register the model and environment and specify the registered name and version separately in the YAML. Use the form `model: azureml:my-model:1` or `environment: azureml:my-env:1`.
+In this example, we specify the `path` (where to upload files from) inline. The CLI automatically uploads the files and registers the model and environment. As a best practice for production, you should register the model and environment and specify the registered name and version separately in the YAML. Use the form `model: azureml:my-model:1` or `environment: azureml:my-env:1`.
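+
+Combining the attributes from the preceding table with registered references, a minimal deployment file might look like the following sketch. The names and VM SKU are hypothetical:
+
+```bash
+# Hypothetical names throughout - a sketch, not the sample's actual file.
+cat > blue-deployment.yml <<'EOF'
+$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
+name: blue
+endpoint_name: my-endpoint
+model: azureml:my-model:1        # a registered model reference
+code_configuration:
+  code: ./scripts                # directory containing the scoring code
+  scoring_script: score.py       # must define init() and run()
+environment: azureml:my-env:1    # a registered environment reference
+instance_type: Standard_DS2_v2
+instance_count: 3                # at least 3 recommended for high availability
+EOF
+```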
For registration, you can extract the YAML definitions of `model` and `environment` into separate YAML files and use the commands `az ml model create` and `az ml environment create`. To learn more about these commands, run `az ml model create -h` and `az ml environment create -h`.
To save time debugging, we *highly recommend* that you test-run your endpoint lo
First create the endpoint. Optionally, for a local endpoint, you can skip this step and directly create the deployment (next step), which will, in turn, create the required metadata. This is useful for development and testing purposes. Now, create a deployment named `blue` under the endpoint. The `--local` flag directs the CLI to deploy the endpoint in the Docker environment.
The `--local` flag directs the CLI to deploy the endpoint in the Docker environm
Check the status to see whether the model was deployed without error: The output should appear similar to the following JSON. Note that the `provisioning_state` is `Succeeded`.
The output should appear similar to the following JSON. Note that the `provision
Invoke the endpoint to score the model by using the convenience command `invoke` and passing query parameters that are stored in a JSON file: If you want to use a REST client (like curl), you must have the scoring URI. To get the scoring URI, run `az ml online-endpoint show --local -n $ENDPOINT_NAME`. In the returned data, find the `scoring_uri` attribute. Sample curl based commands are available later in this doc.
If you want to use a REST client (like curl), you must have the scoring URI. To
In the example *score.py* file, the `run()` method logs some output to the console. You can view this output by using the `get-logs` command again: ## Deploy your online endpoint to Azure
Next, deploy your online endpoint to Azure.
To create the endpoint in the cloud, run the following code: To create the deployment named `blue` under the endpoint, run the following code: This deployment might take up to 15 minutes, depending on whether the underlying environment or image is being built for the first time. Subsequent deployments that use the same environment will finish processing more quickly.
This deployment might take up to 15 minutes, depending on whether the underlying
The `show` command contains information in `provisioning_status` for endpoint and deployment: You can list all the endpoints in the workspace in a table format by using the `list` command:
az ml online-endpoint list --output table
Check the logs to see whether the model was deployed without error: By default, logs are pulled from inference-server. To see the logs from storage-initializer (it mounts assets like model and code to the container), add the `--container storage-initializer` flag.
By default, logs are pulled from inference-server. To see the logs from storage-
You can use either the `invoke` command or a REST client of your choice to invoke the endpoint and score some data: The following example shows how to get the key used to authenticate to the endpoint: Next, use curl to score data. Notice we use `show` and `get-credentials` commands to get the authentication credentials. Also notice that we're using the `--query` flag to filter attributes to only what we need. To learn more about `--query`, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
To understand how `update` works:
1. Because you modified the `init()` function (`init()` runs when the endpoint is created or updated), the message `Updated successfully` will be in the logs. Retrieve the logs by running:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_logs" :::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-managed-online-endpoint.sh" ID="get_logs" :::
The `update` command also works with local deployments. Use the same `az ml online-deployment update` command with the `--local` flag.
> The above is an example of an in-place rolling update: that is, the same deployment is updated with the new configuration, 20% of the nodes at a time. If the deployment has 10 nodes, 2 nodes at a time will be updated. For production usage, you might want to consider [blue-green deployment](how-to-safely-rollout-managed-endpoints.md), which offers a safer alternative.

### (Optional) Configure autoscaling
-Autoscale automatically runs the right amount of resources to handle the load on your application. Managed online endpoints supports autoscaling through integration with the Azure monitor autoscale feature. To configure autoscaling, see [How to autoscale online endpoints](how-to-autoscale-endpoints.md).
+Autoscale automatically runs the right amount of resources to handle the load on your application. Managed online endpoints support autoscaling through integration with the Azure monitor autoscale feature. To configure autoscaling, see [How to autoscale online endpoints](how-to-autoscale-endpoints.md).
### (Optional) Monitor SLA by using Azure Monitor
The logs might take up to an hour to connect. After an hour, send some scoring requests, and then check the logs again.
If you aren't going to use the deployment, you should delete it by running the following code (it deletes the endpoint and all the underlying deployments):
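For example (`--no-wait` returns immediately instead of waiting for the deletion to complete):

```cli
az ml online-endpoint delete -n $ENDPOINT_NAME --yes --no-wait
```

## Next steps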
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Previously updated : 12/21/2021 Last updated : 03/31/2022
ms.devlang: azurecli
# Deploy MLflow models to online endpoints (preview)
-In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) (preview). When you deploy your MLflow model to an online endpoint, it's a no-code-deployment. It doesn't require scoring script and environment.
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) (preview). When you deploy your MLflow model to an online endpoint, it's a no-code-deployment so you don't have to provide a scoring script or an environment.
+
+You only provide the typical MLflow model folder contents:
+
+* MLmodel file
+* `conda.yaml`
+* model file(s)
+
+For no-code-deployment, Azure Machine Learning
+
+* Dynamically installs Python packages provided in the `conda.yaml` file. This means the dependencies are installed during container runtime.
+ * The base container image/curated environment used for dynamic installation is `mcr.microsoft.com/azureml/mlflow-ubuntu18.04-py37-cpu-inference` or `AzureML-mlflow-ubuntu18.04-py37-cpu-inference`
+
+Provides an MLflow base image/curated environment that contains:
+
+* [`azureml-inference-server-http`](how-to-inference-server-http.md)
+* [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst)
+* `pandas`
+* The scoring script baked into the image
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
In the code snippets used in this article, the `ENDPOINT_NAME` environment variable contains the name of the endpoint to create and use. To set it, use the following command from the CLI. Replace `<YOUR_ENDPOINT_NAME>` with the name of your endpoint:
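```cli
export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
```

## Deploy using CLI (v2)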
This example shows how you can deploy an MLflow model to an online endpoint using the CLI (v2).
__create-endpoint.yaml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml":::
+ :::code language="yaml" source="~/azureml-examples-march-cli-preview/cli/endpoints/online/mlflow/create-endpoint.yaml":::
1. To create a new endpoint using the YAML configuration, use the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
1. Create a YAML configuration file for the deployment. The following example configures a deployment of the `sklearn-diabetes` model to the endpoint created in the previous step:

 > [!IMPORTANT]
- > For MLflow no-code-deployment (NCD) to work, setting **`model_format`** to **`mlflow`** is mandatory. For more information, see the [CLI (v2) model YAML schema](reference-yaml-model.md).
+ > For MLflow no-code-deployment (NCD) to work, setting **`type`** to **`mlflow_model`** is required (`type: mlflow_model`). For more information, see [CLI (v2) model YAML schema](reference-yaml-model.md).
__sklearn-deployment.yaml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sklearn-deployment.yaml":::
+ :::code language="yaml" source="~/azureml-examples-march-cli-preview/cli/endpoints/online/mlflow/sklearn-deployment.yaml":::
1. To create the deployment using the YAML configuration, use the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
### Invoke the endpoint

Once your deployment completes, use the following command to make a scoring request to the deployed endpoint. The [sample-request-sklearn.json](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/mlflow/sample-request-sklearn.json) file used in this command is located in the `/cli/endpoints/online/mlflow` directory of the azureml-examples repo:
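For example:

```cli
az ml online-endpoint invoke -n $ENDPOINT_NAME --request-file endpoints/online/mlflow/sample-request-sklearn.json
```

**sample-request-sklearn.json**

The response will be similar to the following text: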
Once you're done with the endpoint, use the following command to delete it:
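For example:

```cli
az ml online-endpoint delete -n $ENDPOINT_NAME --yes
```

## Deploy using Azure Machine Learning studio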
This example shows how you can deploy an MLflow model to an online endpoint using [Azure Machine Learning studio](https://ml.azure.com).
```yml
$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
name: sklearn-diabetes-mlflow
version: 1
- local_path: sklearn-diabetes/model
- model_format: mlflow
+ path: sklearn-diabetes/model
+ type: mlflow_model
description: Scikit-learn MLflow model.
```
1. Provide a name and authentication type for the endpoint, and then select __Next__.
1. When selecting a model, select the MLflow model registered previously. Select __Next__ to continue.
- 1. When you select a model registered in MLflow format, in the Environment step of the wizard, you don't need scoring script and environment.
+ 1. When you select a model registered in MLflow format, in the Environment step of the wizard, you don't need a scoring script or an environment.
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" alt-text="Screenshot showing no code and environment needed for MLflow models":::
This section helps you understand how to deploy models to an online endpoint once you have completed your [training job](how-to-train-cli.md).
-1. Download the outputs from the training job. The outputs contain the model folder.
+1. Download the outputs from the training job. The outputs contain the model folder.
> [!NOTE]
> If you have used `mlflow.autolog()` in your training script, you will see model artifacts in the job's run history. Azure Machine Learning integrates with MLflow's tracking functionality. You can use `mlflow.autolog()` for several common ML frameworks to log model parameters, performance metrics, model artifacts, and even feature importance graphs.
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
description: 'Learn to deploy your model with NVIDIA Triton Inference Server in Azure Machine Learning.'
Previously updated : 11/03/2021 Last updated : 03/31/2022
ms.devlang: azurecli
-# High-performance serving with Triton Inference Server (Preview)
+# High-performance serving with Triton Inference Server (Preview)
+ Learn how to use [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) in Azure Machine Learning with [Managed online endpoints](concept-endpoints.md#managed-online-endpoints).
This section shows how you can deploy Triton to a managed online endpoint using the CLI (v2).
1. Use the following command to set the name of the endpoint that will be created. In this example, a random name is created for the endpoint:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="set_endpoint_name":::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="set_endpoint_name":::
1. Install Python requirements using the following commands:
__create-managed-endpoint.yaml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/triton/single-model/create-managed-endpoint.yaml":::
+ :::code language="yaml" source="~/azureml-examples-march-cli-preview/cli/endpoints/online/triton/single-model/create-managed-endpoint.yaml":::
1. To create a new endpoint using the YAML configuration, use the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="create_endpoint":::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="create_endpoint":::
1. Create a YAML configuration file for the deployment. The following example configures a deployment named __blue__ to the endpoint created in the previous step. The one used in the following commands is located at `/cli/endpoints/online/triton/single-model/create-managed-deployment.yml` in the azureml-examples repo you cloned earlier:

 > [!IMPORTANT]
- > For Triton no-code-deployment (NCD) to work, setting **`model_format`** to **`Triton`** is required. For more information, [check CLI (v2) model YAML schema](reference-yaml-model.md).
+ > For Triton no-code-deployment (NCD) to work, setting **`type`** to **`triton_model`** is required (`type: triton_model`). For more information, see [CLI (v2) model YAML schema](reference-yaml-model.md).
> > This deployment uses a Standard_NC6s_v3 VM. You may need to request a quota increase for your subscription before you can use this VM. For more information, see [NCv3-series](../virtual-machines/ncv3-series.md).
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/triton/single-model/create-managed-deployment.yaml":::
+ :::code language="yaml" source="~/azureml-examples-march-cli-preview/cli/endpoints/online/triton/single-model/create-managed-deployment.yaml":::
1. To create the deployment using the YAML configuration, use the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="create_deployment":::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="create_deployment":::
### Invoke your endpoint
Once your deployment completes, use the following steps to make a scoring request to the deployed endpoint.
1. To get the endpoint scoring uri, use the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="get_scoring_uri":::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="get_scoring_uri":::
1. To get an authentication token, use the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="get_token":::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="get_token":::
1. To score data with the endpoint, use the following command. It submits the image of a peacock (https://aka.ms/peacock-pic) to the endpoint:
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="check_scoring_of_model":::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="check_scoring_of_model":::
The response from the script is similar to the following text:
Once you're done with the endpoint, use the following command to delete it: Use the following command to delete your model:
This section shows how you can deploy Triton to a managed online endpoint using [Azure Machine Learning studio](https://ml.azure.com).
```yml
name: densenet-onnx-model
version: 1
- local_path: ./models
- model_format: Triton
+ path: ./models
+ type: triton_model
description: Registering my Triton format model.
```
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
Previously updated : 10/21/2021 Last updated : 03/31/2022

# Manage Azure Machine Learning environments with the CLI (v2) (preview)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

Azure Machine Learning environments define the execution environments for your jobs or deployments and encapsulate the dependencies for your code. Azure ML uses the environment specification to create the Docker container that your training or scoring code runs in on the specified compute target. You can define an environment from a conda specification, Docker image, or Docker build context.
There are two types of environments in Azure ML: curated and custom environments
Curated environments are provided by Azure ML and are available in your workspace by default. Azure ML routinely updates these environments with the latest framework version releases and maintains them for bug fixes and security patches. They are backed by cached Docker images, which reduces job preparation cost and model deployment time.
-You can use these curated environments out of the box for training or deployment by referencing a specific environment using the `azureml:<curated-environment-name>:<version>` syntax. You can also use them as reference for your own custom environments by modifying the Dockerfiles that back these curated environments.
+You can use these curated environments out of the box for training or deployment by referencing a specific environment using the `azureml:<curated-environment-name>:<version>` or `azureml:<curated-environment-name>@latest` syntax. You can also use them as reference for your own custom environments by modifying the Dockerfiles that back these curated environments.
You can see the set of available curated environments in the Azure ML studio UI, or by using the CLI (v2) via `az ml environment list`.
```cli
az ml environment create --file assets/environment/docker-image.yml
```
Instead of defining an environment from a prebuilt image, you can also define one from a Docker [build context](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#understand-build-context). To do so, specify the directory that will serve as the build context. This directory should contain a Dockerfile and any other files needed to build the image.
-The following example is a YAML specification file for an environment defined from a build context. The local path to the build context folder is specified in the `build.local_path` field, and the relative path to the Dockerfile within that build context folder is specified in the `build.dockerfile_path` field. If `build.dockerfile_path` is omitted in the YAML file, Azure ML will look for a Dockerfile named `Dockerfile` at the root of the build context.
+The following example is a YAML specification file for an environment defined from a build context. The local path to the build context folder is specified in the `build.path` field, and the relative path to the Dockerfile within that build context folder is specified in the `build.dockerfile_path` field. If `build.dockerfile_path` is omitted in the YAML file, Azure ML will look for a Dockerfile named `Dockerfile` at the root of the build context.
In this example, the build context contains a Dockerfile named `Dockerfile` and a `requirements.txt` file that is referenced within the Dockerfile for installing Python packages.
az ml environment update --name docker-image-example --version 1 --set descripti
> [!IMPORTANT]
> For environments, only `description` and `tags` can be updated. All other properties are immutable; if you need to change any of those properties you should create a new version of the environment.
-### Delete
+### Archive and restore
-Delete a specific environment:
+Archiving an environment will hide it by default from list queries (`az ml environment list`). You can still continue to reference and use an archived environment in your workflows. You can archive either an environment container or a specific environment version.
+Archiving an environment container will archive all versions of the environment under that given name. If you create a new environment version under an archived environment container, that new version will automatically be set as archived as well.
+
+Archive an environment container:
+```cli
+az ml environment archive --name docker-image-example
+```
+
+Archive a specific environment version:
```cli
-az ml environment delete --name docker-image-example --version 1
+az ml environment archive --name docker-image-example --version 1
```
+You can restore an archived environment so that it is no longer hidden from list queries.
+
+If an entire environment container is archived, you can restore that archived container. You cannot restore only a specific environment version if the entire environment container is archived; you will need to restore the entire container.
+
+Restore an environment container:
+```cli
+az ml environment restore --name docker-image-example
+```
+
+If only individual environment version(s) within an environment container are archived, you can restore those individual version(s).
+
+Restore a specific environment version:
+```cli
+az ml environment restore --name docker-image-example --version 1
+```
+ ## Use environments for training
-To use an environment for a training job, specify the `environment` field of the job YAML configuration. You can either reference an existing registered Azure ML environment via `environment: azureml:<environment-name>:<environment-version>`, or define an environment specification inline. If defining an environment inline, do not specify the `name` and `version` fields, as these environments are treated as "anonymous" environments and are not tracked in your environment asset registry.
+To use an environment for a training job, specify the `environment` field of the job YAML configuration. You can either reference an existing registered Azure ML environment via `environment: azureml:<environment-name>:<environment-version>` or `environment: azureml:<environment-name>@latest` (to reference the latest version of an environment), or define an environment specification inline. If defining an environment inline, do not specify the `name` and `version` fields, as these environments are treated as "unregistered" environments and are not tracked in your environment asset registry.
When you submit a training job, the building of a new environment can take several minutes. The duration depends on the size of the required dependencies. The environments are cached by the service. So as long as the environment definition remains unchanged, you incur the full setup time only once.
machine-learning How To Safely Rollout Managed Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints.md
Previously updated : 10/21/2021 Last updated : 03/31/2022
You should see the endpoint identified by `$ENDPOINT_NAME` and a deployment called `blue`.
## Scale your existing deployment to handle more traffic
-In the deployment described in [Deploy and score a machine learning model with an online endpoint (preview)](how-to-deploy-managed-online-endpoints.md), you set the `instance_count` to the value `1` in the deployment yaml file. You can scale out using the `update` command :
+In the deployment described in [Deploy and score a machine learning model with an online endpoint (preview)](how-to-deploy-managed-online-endpoints.md), you set the `instance_count` to the value `1` in the deployment yaml file. You can scale out using the `update` command:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="scale_blue" :::
If you aren't going to use the deployment, you should delete it with:
- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)
- [Managed online endpoints SKU list (preview)](reference-managed-online-endpoints-vm-sku-list.md)
- [Troubleshooting online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md)
-- [Managed online endpoints (preview) YAML reference](reference-yaml-endpoint-managed-online.md)
+- [Online endpoint (preview) YAML reference](reference-yaml-endpoint-online.md)
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Previously updated : 03/07/2022 Last updated : 04/04/2022
In this article, you learn how to secure the following inferencing resources in a virtual network:
* When using Azure Container Instances in a virtual network, the virtual network must be in the same resource group as your Azure Machine Learning workspace. Otherwise, the virtual network can be in a different resource group. * If your workspace has a __private endpoint__, the virtual network used for Azure Container Instances must be the same as the one used by the workspace private endpoint.
-* When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace can't be in the virtual network.
+
+> [!WARNING]
+> When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace can't be in the virtual network. Because of this limitation, we do not recommend Azure Container Instances for secure deployments with Azure Machine Learning.
### Azure Kubernetes Service
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-cli.md
Previously updated : 10/21/2021 Last updated : 03/31/2022

# Train models with the CLI (v2) (preview)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

The Azure Machine Learning CLI (v2) is an Azure CLI extension enabling you to accelerate the model training process while scaling up and out on Azure compute, with the model lifecycle tracked and auditable.
Using `--depth 1` clones only the latest commit to the repository, which reduces the time it takes to clone.
### Create compute
-You can create an Azure Machine Learning compute cluster from the command line. For instance, the following commands will create one cluster named `cpu-cluster` and one named `gpu-cluster`. (This code assumes you've first followed the steps in [the v2 installation prerequisite](how-to-configure-cli.md#set-up) to configure the default --workspace/-w and --resource-group/-g parameters.)
+You can create an Azure Machine Learning compute cluster from the command line. For instance, the following commands will create one cluster named `cpu-cluster` and one named `gpu-cluster`.
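A sketch of those commands (the instance counts and the GPU VM size are assumptions; adjust for your subscription's quota):

```cli
az ml compute create -n cpu-cluster --type amlcompute --min-instances 0 --max-instances 8
az ml compute create -n gpu-cluster --type amlcompute --min-instances 0 --max-instances 4 --size Standard_NC12
```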
You are not charged for compute at this point as `cpu-cluster` and `gpu-cluster` will remain at zero nodes until a job is submitted. Learn more about how to [manage and optimize cost for AmlCompute](how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
YAML job specification values can be overridden using `--set` when creating or updating a job.
## Job names
-Most `az ml job` commands other than `create` and `list` require `--name/-n`, which is a job's name or "Run ID" in the studio. You should not directly set a job's `name` property during creation as it must be unique per workspace. Azure Machine Learning generates a random GUID for the job name if it is not set which can be obtained from the output of job creation in the CLI or by copying the "Run ID" property in the studio and MLflow APIs.
+Most `az ml job` commands other than `create` and `list` require `--name/-n`, which is a job's name or "Run ID" in the studio. You typically should not directly set a job's `name` property during creation, as it must be unique per workspace. If the name is not set, Azure Machine Learning generates a random GUID for it, which you can obtain from the output of job creation in the CLI or by copying the "Run ID" property in the studio and MLflow APIs.
To automate jobs in scripts and CI/CD flows, you can capture a job's name when it is created by querying and stripping the output by adding `--query name -o tsv`. The specifics will vary by shell, but for Bash:
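For example (the job YAML path here is illustrative):

```cli
run_id=$(az ml job create -f jobs/basics/hello-world.yml --query name -o tsv)
```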
You can run this job:
## Track models and source code
-Production machine learning models need to be auditable (if not reproducible). It is crucial to keep track of the source code for a given model. Azure Machine Learning takes a snapshot of your source code and keeps it with the job. Additionally, the source repository and commit are kept if you are running jobs from a Git repository.
+Production machine learning models need to be auditable (if not reproducible). It is crucial to keep track of the source code for a given model. Azure Machine Learning takes a snapshot of your source code and keeps it with the job. Additionally, the source repository and commit are tracked if you are running jobs from a Git repository.
> [!TIP]
> If you're following along and running from the examples repository, you can see the source repository and commit in the studio on any of the jobs run so far.
-You can specify the `code.local_path` key in a job with the value as the path to a source code directory. A snapshot of the directory is taken and uploaded with the job. The contents of the directory are directly available from the working directory of the job.
+You can specify the `code` field in a job with the value as the path to a source code directory. A snapshot of the directory is taken and uploaded with the job. The contents of the directory are directly available from the working directory of the job.
> [!WARNING]
> The source code should not include large data inputs for model training. Instead, [use data inputs](#data-inputs). You can use a `.gitignore` file in the source code directory to exclude files from the snapshot. The limits for snapshot size are 300 MB or 2000 files.
Literal inputs to jobs can be [converted to search space inputs](#search-space-i
For a sweep job, you can specify a search space for literal inputs to be chosen from. For the full range of options for search space inputs, see the [sweep job YAML syntax reference](reference-yaml-job-sweep.md).
-> [!WARNING]
-> Sweep jobs are not currently supported in pipeline jobs.
- Let's demonstrate the concept with a simple Python script that takes in arguments and logs a random metric: :::code language="python" source="~/azureml-examples-main/cli/jobs/basics/src/hello-sweep.py":::
And run:
:::code language="azurecli" source="~/azureml-examples-main/cli/train.sh" id="iris_folder":::
+Make sure you accurately specify the input `type` field to either `type: uri_file` or `type: uri_folder` corresponding to whether the data points to a single file or a folder. The default if the `type` field is omitted is `uri_folder`.
#### Private data

For private data in Azure Blob Storage or Azure Data Lake Storage connected to Azure Machine Learning through a datastore, you can use Azure Machine Learning URIs of the format `azureml://datastores/<DATASTORE_NAME>/paths/<PATH_TO_DATA>` for input data. For instance, if you upload the Iris CSV to a directory named `/example-data/` in the Blob container corresponding to the datastore named `workspaceblobstore`, you can modify a previous job to use the file in the datastore:
Or the entire directory:
### Default outputs
-The `./outputs` and `./logs` directories receive special treatment by Azure Machine Learning. If you write any files to these directories during your job, these files will get uploaded to the job so that you can still access them once it is complete. The `./outputs` folder is uploaded at the end of the job, while the files written to `./logs` are uploaded in real time. Use the latter if you want to stream logs during the job, such as TensorBoard logs.
+The `./outputs` and `./logs` directories receive special treatment by Azure Machine Learning. If you write any files to these directories during your job, these files will get uploaded to the job so that you can still access them once the job is complete. The `./outputs` folder is uploaded at the end of the job, while the files written to `./logs` are uploaded in real time. Use the latter if you want to stream logs during the job, such as TensorBoard logs.
+
+In addition, any files logged from MLflow via autologging or `mlflow.log_*` for artifact logging will get automatically persisted as well. Collectively with the aforementioned `./outputs` and `./logs` directories, this set of files and directories will be persisted to a directory that corresponds to that job's default artifact location.
You can modify the "hello world" job to output to a file in the default outputs directory instead of printing to `stdout`:
To register a model, you can download the outputs and create a model from the local path:
:::code language="azurecli" source="~/azureml-examples-main/cli/train.sh" id="sklearn_download_register_model":::
+For the full set of configurable options for running command jobs, see the [command job YAML schema reference](reference-yaml-job-command.md).
+ ## Sweep hyperparameters You can modify the previous job to sweep over hyperparameters:
And run it:
> [!TIP] > Check the "Child runs" tab in the studio to monitor progress and view parameter charts..
-For more sweep options, see the [sweep job YAML syntax reference](reference-yaml-job-sweep.md).
+For the full set of configurable options for sweep jobs, see the [sweep job YAML schema reference](reference-yaml-job-sweep.md).
## Distributed training
The CIFAR-10 dataset in `torchvision` expects as input a directory that contains the `cifar-10-batches-py` directory.
:::code language="azurecli" source="~/azureml-examples-main/setup-repo/create-datasets.sh" id="download_untar_cifar":::
-Then create an Azure Machine Learning dataset from the local directory, which will be uploaded to the default datastore:
+Then create an Azure Machine Learning data asset from the local directory, which will be uploaded to the default datastore:
:::code language="azurecli" source="~/azureml-examples-main/setup-repo/create-datasets.sh" id="create_cifar"::: Optionally, remove the local file and directory:
-Datasets (File only) can be referred to in a job using the `dataset` key of a data input. The format is `azureml:<DATASET_NAME>:<DATASET_VERSION>`, so for the CIFAR-10 dataset just created, it is `azureml:cifar-10-example:1`.
+Registered data assets can be used as inputs to job using the `path` field for a job input. The format is `azureml:<data_name>:<data_version>`, so for the CIFAR-10 dataset just created, it is `azureml:cifar-10-example:1`. You can optionally use the `azureml:<data_name>@latest` syntax instead if you want to reference the latest version of the data asset. Azure ML will resolve that reference to the explicit version.
-With the dataset in place, you can author a distributed PyTorch job to train our model:
+With the data asset in place, you can author a distributed PyTorch job to train our model:
And run it:
The CIFAR-10 example above translates well to a pipeline job. The previous job can be split into three jobs:
- "eval-model" to take the data and the trained model and evaluate accuracy Both "train-model" and "eval-model" will have a dependency on the "get-data" job's output. Additionally, "eval-model" will have a dependency on the "train-model" job's output. Thus the three jobs will run sequentially.-
+<!--
You can orchestrate these three jobs within a pipeline job: :::code language="yaml" source="~/azureml-examples-main/cli/jobs/pipelines/cifar-10/job.yml"::: And run: Pipelines can also be written using reusable components. For more, see [Create and run components-based machine learning pipelines with the Azure Machine Learning CLI (Preview)](how-to-create-component-pipelines-cli.md).
-->
machine-learning How To Train With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-rest.md
Previously updated : 10/21/2021 Last updated : 03/31/2022
Learn how to use the Azure Machine Learning REST API to create and manage training jobs (preview).

[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]

The REST API uses standard HTTP verbs to create, retrieve, update, and delete resources. The REST API works with any language or tool that can make HTTP requests. REST's straightforward structure makes it a good choice in scripting environments and for MLOps automation.
Administrative REST requests a [service principal authentication token](how-to-m
```bash
TOKEN=$(az account get-access-token --query accessToken -o tsv)
```
-The service provider uses the `api-version` argument to ensure compatibility. The `api-version` argument varies from service to service. The current Azure Machine Learning API version is `2021-03-01-preview`. Set the API version as a variable to accommodate future versions:
+The service provider uses the `api-version` argument to ensure compatibility. The `api-version` argument varies from service to service. The current Azure Machine Learning API version is `2022-02-01-preview`. Set the API version as a variable to accommodate future versions:
```bash
-API_VERSION="2021-03-01-preview"
+API_VERSION="2022-02-01-preview"
```

### Compute
The LightGBM example needs to run in a LightGBM environment. Create the environm
You can configure the Docker image with `Docker` and add conda dependencies with `condaFile`:

### Datastore
```bash
AZURE_STORAGE_KEY=$(az storage account keys list --account-name $AZURE_STORAGE_ACCOUNT --query '[0].value' -o tsv)
```
### Data
-Now that you have the datastore, you can create a dataset. For this example, use the common dataset `iris.csv` and point to it in the `path`.
+Now that you have the datastore, you can create a dataset. For this example, use the common dataset `iris.csv`.
### Code
```bash
az storage blob upload-batch -d $AZUREML_DEFAULT_CONTAINER/src \
 -s jobs/train/lightgbm/iris/src --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_KEY
```
-Once you upload your code, you can specify your code with a PUT request and refer to the datastore with `datastoreId`.
+Once you upload your code, you can specify your code with a PUT request and reference the url through `codeUri`.
## Submit a training job
Now that your assets are in place, you can run the LightGBM job, which outputs a
- **run_id**: [Optional] The name of the job, which must be unique across all jobs. Unless a name is specified either in the YAML file via the `name` field or the command line via `--name/-n`, a GUID/UUID is automatically generated and used for the name.
- **jobType**: The job type. For a basic training job, use `Command`.
-- **codeId**: The path to your training script.
+- **codeId**: The ARMId reference of the name and version of your training script.
- **command**: The command to execute. Input data can be written into the command and can be referred to with data binding.
-- **environmentId**: The path to your environment.
-- **inputDataBindings**: Data binding can help you reference input data. Create an environment variable and the name of the binding will be added to AZURE_ML_INPUT_, which you can refer to in `command`. To create a data binding, you need to add the path to the data you created as `dataId`.
+- **environmentId**: The ARMId reference of the name and version of your environment.
+- **inputDataBindings**: Data binding can help you reference input data. Create an environment variable and the name of the binding will be added to AZURE_ML_INPUT_, which you can refer to in `command`. You can directly reference a public blob url file as a `UriFile` through the `uri` parameter.
- **experimentName**: [Optional] Tags the job to help you organize jobs in Azure Machine Learning studio. Each job's run record is organized under the corresponding experiment in the studio "Experiment" tab. If omitted, tags default to the name of the working directory when the job is created.
-- **compute**: The `target` specifies the compute target, which can be the path to your compute. `instanceCount` specifies the number of instances you need for the job.
+- **computeId**: The `computeId` specifies the compute target name through an ARMId.
Use the following commands to submit the training job:

## Submit a hyperparameter sweep job

Azure Machine Learning also lets you efficiently tune training hyperparameters. You can create a hyperparameter tuning suite with the REST APIs. For more information on Azure Machine Learning's hyperparameter tuning options, see [Hyperparameter tuning a model](how-to-tune-hyperparameters.md). Specify the hyperparameter tuning parameters to configure the sweep:

- **jobType**: The job type. For a sweep job, it will be `Sweep`.
-- **algorithm**: The sampling algorithm - "random" is often a good place to start. See the sweep job [schema](https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json) for the enumeration of options.
+- **algorithm**: The sampling algorithm class; "random" is often a good place to start. See the sweep job [schema](https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json) for the enumeration of options.
- **trial**: The command job configuration for each trial to be run.
- **objective**: The `primaryMetric` is the optimization metric, which must match the name of a metric logged from the training code. The `goal` specifies the direction (minimize or maximize). See the [schema](https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json) for the full enumeration of options.
-- **searchSpace**: A dictionary of the hyperparameters to sweep over. The key is a name for the hyperparameter, for example, `learning_rate`. The value is the hyperparameter distribution. See the [schema](https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json) for the enumeration of options.
-- **maxTotalTrials**: The maximum number of individual trials to run.
-- **maxConcurrentTrials**: [Optional] The maximum number of trials to run concurrently on your compute cluster.
-- **timeout**: [Optional] The maximum number of minutes to run the sweep job for.
+- **searchSpace**: A generic object of hyperparameters to sweep over. The key is a name for the hyperparameter, for example, `learning_rate`. The value is the hyperparameter distribution. See the [schema](https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json) for the enumeration of options.
+- **Limits**: `JobLimitsType` of type `sweep` is an object definition of the sweep job limits parameters. `maxTotalTrials` [Optional] is the maximum number of individual trials to run. `maxConcurrentTrials` is the maximum number of trials to run concurrently on your compute cluster.
To create a sweep job with the same LightGBM example, use the following commands:

## Next steps
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
Previously updated : 10/21/2021 Last updated : 03/31/2022

#Customer intent: As an ML Deployment Pro, I want to figure out why my batch endpoint doesn't run so that I can fix it.

# Troubleshooting batch endpoints (preview)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

Learn how to troubleshoot and solve, or work around, common errors you may come across when using [batch endpoints](how-to-use-batch-endpoint.md) (preview) for batch scoring.
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Previously updated : 11/03/2021 Last updated : 03/31/2022

#Customer intent: As a data scientist, I want to figure out why my online endpoint deployment failed so that I can fix it.
The section [HTTP status codes](#http-status-codes) explains how invocation and
## Deploy locally
-Local deployment is deploying a model to a local Docker environment. Local deployment is useful for testing and debugging before to deployment to the cloud.
+Local deployment is deploying a model to a local Docker environment. Local deployment is useful for testing and debugging before deployment to the cloud.
> [!TIP]
> Use Visual Studio Code to test and debug your endpoints locally. For more information, see [debug online endpoints locally in Visual Studio Code](how-to-debug-managed-online-endpoints-visual-studio-code.md).
As part of local deployment, the following steps take place:
For more, see [Deploy locally in Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints).
+## Conda installation
+
+Generally, issues with MLflow deployment stem from issues with the installation of the user environment specified in the `conda.yaml` file.
+
+To debug conda installation problems, try the following:
+
+1. Check the logs for conda installation. If the container crashed or is taking too long to start up, it is likely that the conda environment update failed to resolve correctly.
+
+1. Install the mlflow conda file locally with the command `conda env create -n userenv -f <CONDA_ENV_FILENAME>`.
+
+1. If there are errors locally, try resolving the conda environment and creating a functional one before redeploying.
+
+1. If the container crashes even if it resolves locally, the SKU size used for deployment may be too small.
+ 1. Conda package installation occurs at runtime, so if the SKU size is too small to accommodate all of the packages detailed in the `conda.yaml` environment file, then the container may crash.
+ 1. A Standard_F4s_v2 VM is a good starting SKU size, but larger ones may be needed depending on which dependencies are specified in the conda file.
## Get container logs

You can't get direct access to the VM where the model is deployed. However, you can get logs from some of the containers that are running on the VM. The amount of information depends on the provisioning status of the deployment. If the specified container is up and running you'll see its console output, otherwise you'll get a message to try again later.
Try to delete some unused endpoints in this subscription.
#### Kubernetes quota
-The requested CPU or memory couldn't be satisfied. Please adjust your request or the cluster.
+The requested CPU or memory couldn't be satisfied. Adjust your request or the cluster.
#### Other quota
To run the `score.py` provided as part of the deployment, Azure creates a container that includes all the resources the deployment needs, and runs the scoring script in that container.
### ERROR: ResourceNotFound
-This error occurs when Azure Resource Manager can't find a required resource. For example, you will receive this error if a storage account was referred to but cannot be found at the path on which it was specified. Be sure to double check resources which might have been supplied by exact path or the spelling of their names.
+This error occurs when Azure Resource Manager can't find a required resource. For example, you will receive this error if a storage account was referred to but cannot be found at the path on which it was specified. Be sure to double check resources that might have been supplied by exact path or the spelling of their names.
For more information, see [Resolve resource not found errors](../azure-resource-manager/troubleshooting/error-not-found.md).
If you are having trouble with autoscaling, see [Troubleshooting Azure autoscale
## Bandwidth limit issues
-Managed online endpoints have bandwidth limits for each endpoints. You find the limit configuration in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview) here. If your bandwidth usage exceeds the limit, your request will be delayed. To monitor the bandwidth delay:
+Managed online endpoints have bandwidth limits for each endpoint. You find the limit configuration in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview) here. If your bandwidth usage exceeds the limit, your request will be delayed. To monitor the bandwidth delay:
- Use the "Network bytes" metric to understand the current bandwidth usage. For more information, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).
- Two response trailers will be returned if the bandwidth limit is enforced:
When you access online endpoints with REST requests, the returned status codes a
- [Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md)
- [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md)
-- [Managed online endpoints (preview) YAML reference](reference-yaml-endpoint-managed-online.md)
+- [Online endpoint (preview) YAML reference](reference-yaml-endpoint-online.md)
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
Previously updated : 10/21/2021 Last updated : 03/31/2022

# Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
# Use batch endpoints (preview) for batch scoring

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

Learn how to use batch endpoints (preview) to do batch scoring. Batch endpoints simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints (preview)?](concept-endpoints.md).
Set your endpoint name. Replace `YOUR_ENDPOINT_NAME` with a unique name within an Azure region.
For Unix, run this command:
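```cli
export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
```

For Windows, run this command: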
```cli
set ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
```
Batch endpoints run only on cloud computing resources, not locally. The cloud computing resource is a reusable virtual computer cluster. Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>`.
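A sketch of that command (the instance counts are assumptions; adjust as needed):

```cli
az ml compute create -n batch-cluster --type amlcompute --min-instances 0 --max-instances 5
```

> [!NOTE]
> You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).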
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job.
The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint). In the repository, this file is located at `/cli/endpoints/batch/batch-endpoint.yml`. The following table describes the key properties of the endpoint YAML. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](./reference-yaml-endpoint-batch.md).
For more information about how to reference an Azure ML entity, see [Referencing
The example repository contains all the required files. The following YAML file defines a batch deployment with all the required inputs and optional settings. You can include this file in your CLI command to [create your batch deployment](#create-a-batch-deployment). In the repository, this file is located at `/cli/endpoints/batch/nonmlflow-deployment.yml`. The following table describes the key properties of the deployment YAML. For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
| `$schema` | [Optional] The YAML schema. You can view the schema in the above example in a browser to see all available options for a batch deployment YAML file. |
| `name` | The name of the deployment. |
| `endpoint_name` | The name of the endpoint to create the deployment under. |
-| `model` | The model to be used for batch scoring. The example defines a model inline using `local_path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
-| `code_configuration.code.local_path` | The directory that contains all the Python source code to score the model. |
+| `model` | The model to be used for batch scoring. The example defines a model inline using `path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
+| `code_configuration.code.path` | The directory that contains all the Python source code to score the model. |
| `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. Make sure that enough data is included in your `run()` response to correlate the input with the output. |
| `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. Follow the [Environment schema](reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
| `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using the `azureml:<compute-name>` syntax. |
Now, let's deploy the model with batch endpoints and run batch scoring.
The simplest way to create a batch endpoint is to run the following code providing only a `--name`.
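For example:

```cli
az ml batch-endpoint create --name $ENDPOINT_NAME
```

You can also create a batch endpoint using a YAML file. Add the `--file` parameter to the above command and specify the YAML file path.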
Run the following code to create a batch deployment named `nonmlflowdp` under the batch endpoint and set it as the default deployment.
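For example (the deployment YAML path matches the repository location given above):

```cli
az ml batch-deployment create --file endpoints/batch/nonmlflow-deployment.yml --endpoint-name $ENDPOINT_NAME --set-default
```

> [!TIP]
> The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#deploy-a-new-model) section.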
Use `show` to check endpoint and deployment details.
To check a batch deployment and the batch endpoint, run the following code. As the newly created deployment is set as the default deployment, you should see `nonmlflowdp` in `defaults.deployment_name` from the endpoint response.
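For example:

```cli
az ml batch-deployment show --name nonmlflowdp --endpoint-name $ENDPOINT_NAME
az ml batch-endpoint show --name $ENDPOINT_NAME
```

### Invoke the batch endpoint to start a batch scoring job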
There are three options to specify the data inputs in CLI `invoke`.
The example uses publicly available data in a folder from `https://pipelinedata.blob.core.windows.net/sampledata/mnist`, which contains thousands of hand-written digits. Name of the batch scoring job will be returned from the invoke response. Run the following code to invoke the batch endpoint using this data. `--query name` is added to only return the job name from the invoke response, and it will be used later to [Monitor batch scoring job execution progress](#monitor-batch-scoring-job-execution-progress) and [Check batch scoring results](#check-batch-scoring-results). Remove `--query name -o tsv` if you want to see the full invoke response. For more information on the `--query` parameter, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
- :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="start_batch_scoring_job" :::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/batch-score.sh" ID="start_batch_scoring_job" :::
* __Option 2: Registered dataset__
- Use `--input-dataset` to pass in an Azure Machine Learning registered dataset. To create a dataset, check `az ml dataset create -h` for instruction, and follow the [Dataset schema](reference-yaml-dataset.md#yaml-syntax).
+ Use `--input-dataset` to pass in an Azure Machine Learning registered dataset. To create a dataset, check `az ml dataset create -h` for instruction, and follow the [Dataset schema](reference-yaml-data.md#yaml-syntax).
> [!NOTE]
> FileDataset that is created using the preceding version of the CLI and Python SDK can also be used. TabularDataset is not supported.
Some settings can be overridden when you invoke the endpoint, to make the best use of the compute resources:
To specify the output location and override settings when you invoke, run the following code. The example stores the outputs in a folder with the same name as the endpoint in the workspace's default blob storage, and also uses a random file name to ensure the output location uniqueness. The code should work in Unix. Replace with your own unique folder and file name.

### Monitor batch scoring job execution progress
Batch scoring jobs usually take some time to process the entire set of inputs.
You can use CLI `job show` to view the job. Run the following code to check job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`.

### Check batch scoring results
Follow the below steps to view the scoring results in Azure Storage Explorer when the job is completed:
1. Run the following code to open batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
- :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="show_job_in_studio" :::
+ :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/batch-score.sh" ID="show_job_in_studio" :::
1. In the graph of the run, select the `batchscoring` step. 1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
Once you have a batch endpoint, you can continue to refine your model and add new deployments.
To create a new batch deployment under the existing batch endpoint without setting it as the default deployment, run the following code: Notice that `--set-default` is not used. If you `show` the batch endpoint again, you should see no change to `defaults.deployment_name`.
The example uses a model (`/cli/endpoints/batch/autolog_nyc_taxi`) trained and tracked with MLflow.
Below is the YAML file the example uses to deploy an MLflow model, which contains only the minimum required properties. The source file in the repository is `/cli/endpoints/batch/mlflow-deployment.yml`. > [!NOTE] > Automatic generation of `scoring_script` and `environment` supports only the Python Function model flavor and column-based model signatures.
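As a hedged sketch of the minimal shape such a file takes (the names are placeholders; the repository file above is the authoritative version), note that `code_configuration` and `environment` are omitted because they are auto-generated for MLflow-format models:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: mlflowdp
endpoint_name: mybatchedp
model:
  path: ./autolog_nyc_taxi
compute: azureml:batch-cluster
```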
To test the new non-default deployment, run the following code. The example uses a different model that accepts a publicly available CSV file from `https://pipelinedata.blob.core.windows.net/sampledata/nytaxi/taxi-tip-data.csv`. Notice `--deployment-name` is used to specify the new deployment name. This parameter allows you to `invoke` a non-default deployment without updating the default deployment of the batch endpoint.
To update the default batch deployment of the endpoint, run the following code: Now, if you `show` the batch endpoint again, you should see `defaults.deployment_name` is set to `mlflowdp`. You can `invoke` the batch endpoint directly without the `--deployment-name` parameter.
If you want to update the deployment (for example, update the code, model, environment, or settings), update the YAML file, and then run the deployment update command.
If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion. Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs will not be deleted. ## Next steps
machine-learning How To Use Batch Endpoints Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoints-studio.md
Previously updated : 10/21/2021 Last updated : 03/31/2022
In this article, you learn how to use batch endpoints (preview) to do batch scoring in [Azure Machine Learning studio](https://ml.azure.com). For more information, see [What are Azure Machine Learning endpoints (preview)?](concept-endpoints.md). + In this article, you learn about: > [!div class="checklist"]
machine-learning Reference Yaml Component Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-command.md
Previously updated : 10/21/2021- Last updated : 03/31/2022+ # CLI (v2) command component YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `description` | string | Description of the component. | | | | `tags` | object | Dictionary of tags for the component. | | | | `command` | string | **Required.** The command to execute. | | |
-| `code.local_path` | string | Local path to the source code directory to be uploaded and used for the component. | | |
+| `code` | string | Local path to the source code directory to be uploaded and used for the component. | | |
| `environment` | string or object | **Required.** The environment to use for the component. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties as they are not supported for inline environments. | | | | `distribution` | object | The distribution configuration for distributed training scenarios. One of [MpiConfiguration](#mpiconfiguration), [PyTorchConfiguration](#pytorchconfiguration), or [TensorFlowConfiguration](#tensorflowconfiguration). | | | | `resources.instance_count` | integer | The number of nodes to use for the job. | | `1` |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |
-| `type` | string | **Required.** The type of component input. <br><br> Use `type: path` if you want the runtime job input value to be a data URI or Azure ML dataset when the component is run. | `number`, `integer`, `boolean`, `string`, `path` | |
+| `type` | string | **Required.** The type of component input. <br><br> Use `type: uri_file/uri_folder` if you want the runtime job input value to be a data URI or registered Azure ML data asset when the component is run. | `number`, `integer`, `boolean`, `string`, `uri_file`, `uri_folder` | |
| `description` | string | Description of the input. | | | | `default` | number, integer, boolean, or string | The default value for the input. | | | | `optional` | boolean | Whether the input is optional. | | `false` |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |
-| `type` | string | **Required.** The type of component output. | `path` | |
+| `type` | string | **Required.** The type of component output. | `uri_folder` | |
| `description` | string | Description of the output. | | | ## Remarks
machine-learning Reference Yaml Compute Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-kubernetes.md
+
+ Title: 'CLI (v2) Attached Azure Arc-enabled Kubernetes cluster (KubernetesCompute) YAML schema'
+
+description: Reference documentation for the CLI (v2) Attached Azure Arc-enabled Kubernetes cluster (KubernetesCompute) YAML schema.
+++++++ Last updated : 03/31/2022+++
+# CLI (v2) Attached Azure Arc-enabled Kubernetes cluster (KubernetesCompute) YAML schema
++
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json.
+++
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
+| `type` | string | **Required.** The type of compute. | `kubernetes` | |
+| `name` | string | **Required.** Name of the compute. | | |
+| `description` | string | Description of the compute. | | |
+| `resource_id` | string | Fully qualified resource ID of the Azure Arc-enabled Kubernetes cluster to attach to the workspace as a compute target. | | |
+| `namespace` | string | The Kubernetes namespace to use for the compute target. The namespace must be created in the Kubernetes cluster before the cluster can be attached to the workspace as a compute target. All Azure ML workloads running on this compute target will run under the namespace specified in this field. | | |
+| `identity` | object | The managed identity configuration to assign to the compute. KubernetesCompute clusters support only one system-assigned identity or multiple user-assigned identities, not both concurrently. | | |
+| `identity.type` | string | The type of managed identity to assign to the compute. If the type is `user_assigned`, the `identity.user_assigned_identities` property must also be specified. | `system_assigned`, `user_assigned` | |
+| `identity.user_assigned_identities` | array | List of fully qualified resource IDs of the user-assigned identities. | | |
+
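Putting these keys together, an attached Arc-enabled cluster might be described by a YAML like the following sketch; the resource ID shape and all names are illustrative assumptions:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json
type: kubernetes
name: arc-cluster-compute
resource_id: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>
namespace: azureml-workloads
identity:
  type: system_assigned
```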
+## Remarks
+
+The `az ml compute` commands can be used for managing Azure Arc-enabled Kubernetes clusters (KubernetesCompute) attached to an Azure Machine Learning workspace.
+
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
+- [Configure and attach Azure Arc-enabled Kubernetes clusters](how-to-attach-arc-kubernetes.md)
machine-learning Reference Yaml Core Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md
Previously updated : 10/21/2021- Last updated : 03/31/2022+ # CLI (v2) core YAML syntax
This article provides an overview of core syntax concepts you will encounter whi
## Referencing an Azure ML entity
-Azure ML provides a reference syntax (consisting of a shorthand and longhand format) for referencing an existing Azure ML entity when configuring a YAML file. For example, you can reference an existing registered environment in your workspace to use at the environment for a job.
+Azure ML provides a reference syntax (consisting of a shorthand and longhand format) for referencing an existing Azure ML entity when configuring a YAML file. For example, you can reference an existing registered environment in your workspace to use as the environment for a job.
-### Shorthand
+### Referencing an Azure ML asset
-The shorthand syntax consists of the following:
+There are two options for referencing an Azure ML asset (environments, models, data, and components):
+* Reference an explicit version of an asset:
+ * Shorthand syntax: `azureml:<asset_name>:<asset_version>`
+ * Longhand syntax, which includes the Azure Resource Manager (ARM) resource ID of the asset:
+ ```
+ azureml:/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>/environments/<environment-name>/versions/<environment-version>
+ ```
+* Reference the latest version of an asset:
-* For assets: `azureml:<asset-name>:<asset-version>`
-* For resources: `azureml:<resource-name>`
+ In some scenarios you may want to reference the latest version of an asset without having to explicitly look up and specify the actual version string itself. The latest version is defined as the most recently created version of an asset under a given name.
-Azure ML will resolve this reference to the specified asset or resource in the workspace.
+ You can reference the latest version using the following syntax: `azureml:<asset_name>@latest`. Azure ML will resolve the reference to an explicit asset version in the workspace, as the sketch below illustrates.
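Both reference forms are shown here; the environment name is taken from the examples that appear later on this page:

```yaml
# Explicit version of the asset
environment: azureml:AzureML-Minimal:1
# Latest created version under the given name
environment: azureml:AzureML-Minimal@latest
```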
-### Longhand
-
-The longhand syntax consists of the `azureml:` prefix plus the ARM resource ID of the entity:
+### Reference an Azure ML resource
+To reference an Azure ML resource (such as compute), you can use either of the following syntaxes:
+* Shorthand syntax: `azureml:<resource_name>`
+* Longhand syntax, which includes the ARM resource ID of the resource:
```
-azureml:/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>/environments/<environment-name>/versions/<environment-version>
+azureml:/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>/compute/<compute-name>
``` ## Azure ML data reference URI
The supported scenarios are covered below.
### Parameterizing the `command` with the `inputs` and `outputs` contexts of a job
-You can specify literal values, URI paths, and Azure ML datasets as inputs to a job. The `command` can then be parameterized with references to those input(s) using the `${{inputs.<input-name>}}` syntax. References to literal inputs will get resolved to the literal value at runtime, while references to data URI or Azure ML dataset inputs will get resolved to the download path or mount path (depending on the `mode` specified).
+You can specify literal values, URI paths, and registered Azure ML data assets as inputs to a job. The `command` can then be parameterized with references to those input(s) using the `${{inputs.<input_name>}}` syntax. References to literal inputs will get resolved to the literal value at runtime, while references to data inputs will get resolved to the download path or mount path (depending on the `mode` specified).
-Likewise, outputs to the job can also be referenced in the `command`. For each named output specified in the `outputs` dictionary, Azure ML will autogenerate an output location on the default datastore where you can write files to. The output location for each named output is based on the following templatized path: `<default-datastore>/azureml/<job-name>/<output-name>/`. Parameterizing the `command` with the `${{outputs.<output-name>}}` syntax will resolve that reference to the autogenerated path, so that your script can write files to that location from the job.
+Likewise, outputs to the job can also be referenced in the `command`. For each named output specified in the `outputs` dictionary, Azure ML will system-generate an output location on the default datastore where you can write files to. The output location for each named output is based on the following templatized path: `<default-datastore>/azureml/<job-name>/<output_name>/`. Parameterizing the `command` with the `${{outputs.<output_name>}}` syntax will resolve that reference to the system-generated path, so that your script can write files to that location from the job.
-In the example below for a command job YAML file, the `command` is parameterized with two inputs, a literal input and a URI input, and one output. At runtime, the `${{inputs.learning_rate}}` expression will resolve to `0.01`, and the `${{inputs.iris}}` expression will resolve to the download path of the `iris.csv` file. `${{outputs.model_dir}}` will resolve to the mount path of the autogenerated output location.
+In the example below for a command job YAML file, the `command` is parameterized with two inputs, a literal input and a data input, and one output. At runtime, the `${{inputs.learning_rate}}` expression will resolve to `0.01`, and the `${{inputs.iris}}` expression will resolve to the download path of the `iris.csv` file. `${{outputs.model_dir}}` will resolve to the mount path of the system-generated output location corresponding to the `model_dir` output.
```yaml $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
-code:
- local_path: ./src
+code: ./src
command: python train.py --lr ${{inputs.learning_rate}} --training-data ${{inputs.iris}} --model-dir ${{outputs.model_dir}}
-environment: azureml:AzureML-Minimal:1
+environment: azureml:AzureML-Minimal@latest
compute: azureml:cpu-cluster inputs: learning_rate: 0.01 iris:
- file: https://azuremlexamples.blob.core.windows.net/datasets/iris.csv
+ type: uri_file
+ path: https://azuremlexamples.blob.core.windows.net/datasets/iris.csv
mode: download outputs: model_dir:
In the example below for a sweep job YAML file, the `${{search_space.learning_ra
```yaml $schema: https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json type: sweep
-sampling_algorithm: random
+sampling_algorithm:
+ type: random
search_space: learning_rate: type: uniform
objective:
goal: minimize primary_metric: test-multi_logloss trial:
- code:
- local_path: src
+ code: ./src
command: >- python train.py --training-data ${{inputs.iris}} --lr ${{search_space.learning_rate}} --boosting ${{search_space.boosting}}
- environment: azureml:AzureML-Minimal:1
+ environment: azureml:AzureML-Minimal@latest
inputs: iris:
- file: https://azuremlexamples.blob.core.windows.net/datasets/iris.csv
+ type: uri_file
+ path: https://azuremlexamples.blob.core.windows.net/datasets/iris.csv
mode: download compute: azureml:cpu-cluster ``` ### Binding inputs and outputs between steps in a pipeline job
-Expressions are also used for binding inputs and outputs between steps in a pipeline job. For example, you can bind the input of one job (job #2) in a pipeline to the output of another job (job #1). This usage will signal to Azure ML the dependency flow of the pipeline graph, and job #2 will get executed after job #1, since the output of job #1 is required as an input for job #2.
+Expressions are also used for binding inputs and outputs between steps in a pipeline job. For example, you can bind the input of one job (job B) in a pipeline to the output of another job (job A). This usage will signal to Azure ML the dependency flow of the pipeline graph, and job B will get executed after job A, since the output of job A is required as an input for job B.
For a pipeline job YAML file, the `inputs` and `outputs` sections of each child job are evaluated within the parent context (the top-level pipeline job). The `command`, on the other hand, will resolve to the current context (the child job). There are two ways to bind inputs and outputs in a pipeline job:
-**1) Bind to the top-level inputs and outputs of the pipeline job**
+**Bind to the top-level inputs and outputs of the pipeline job**
-You can bind the inputs or outputs of a child job to the inputs/outputs of the top-level parent pipeline job using the following syntax: `${{inputs.<input-name>}}` or `${{outputs.<output-name>}}`. This reference resolves to the parent context; hence the top-level inputs/outputs.
+You can bind the inputs or outputs of a child job (a pipeline step) to the inputs/outputs of the top-level parent pipeline job using the following syntax: `${{parent.inputs.<input_name>}}` or `${{parent.outputs.<output_name>}}`. This reference resolves to the `parent` context; hence the top-level inputs/outputs.
-In the example below, the output (`model_dir`) of the final `train` step is bound to the top-level pipeline job output via `${{outputs.trained_model}}`
+In the example below, the input (`raw_data`) of the first `prep` step is bound to the top-level pipeline input via `${{parent.inputs.input_data}}`. The output (`model_dir`) of the final `train` step is bound to the top-level pipeline job output via `${{parent.outputs.trained_model}}`.
-**2) Bind to the inputs and outputs of another child job (step)**
+**Bind to the inputs and outputs of another child job (step)**
-To bind the inputs/outputs of one step to the inputs/outputs of another step, use the following syntax: `${{jobs.<step-name>.inputs.<input-name>}}` or `${{jobs.<step-name>.outputs.<outputs-name>}}`. Again, this reference resolves to the parent context, so the context starts with `jobs.<step-name>`.
+To bind the inputs/outputs of one step to the inputs/outputs of another step, use the following syntax: `${{parent.jobs.<step_name>.inputs.<input_name>}}` or `${{parent.jobs.<step_name>.outputs.<outputs_name>}}`. Again, this reference resolves to the parent context, so the expression must start with `parent.jobs.<step_name>`.
-In the example below, the input (`clean_data`) of the `train` step is bound to the output (`prep_data`) of the `prep` step via `${{jobs.prep.outputs.prep_data}}`. The prepared data from the `prep` step will be used as the training data for the `train` step.
+In the example below, the input (`training_data`) of the `train` step is bound to the output (`clean_data`) of the `prep` step via `${{parent.jobs.prep.outputs.clean_data}}`. The prepared data from the `prep` step will be used as the training data for the `train` step.
On the other hand, the context references within the `command` properties will resolve to the current context. For example, the `${{inputs.raw_data}}` reference in the `prep` step's `command` will resolve to the inputs of the current context, which is the `prep` child job. The lookup will be done on `prep.inputs`, so an input named `raw_data` must be defined there.
On the other hand, the context references within the `command` properties will r
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json type: pipeline inputs:
+ input_data:
+ type: uri_folder
+ path: https://azuremlexamples.blob.core.windows.net/datasets/cifar10/
outputs: trained_model: jobs: prep: type: command inputs:
- raw_data:
- folder:
- mode: rw_mount
+ raw_data: ${{parent.inputs.input_data}}
outputs:
- prep_data:
- mode: upload
- code:
- local_path: src/prep
- environment: azureml:AzureML-Minimal:1
+ clean_data:
+ code: src/prep
+ environment: azureml:AzureML-Minimal@latest
command: >- python prep.py --raw-data ${{inputs.raw_data}}
- --prep-data ${{outputs.prep_data}}
+ --prep-data ${{outputs.clean_data}}
compute: azureml:cpu-cluster train: type: command inputs:
- clean_data: ${{jobs.prep.outputs.prep_data}}
+ training_data: ${{parent.jobs.prep.outputs.clean_data}}
+ num_epochs: 1000
outputs:
- model_dir: $${{outputs.trained_model}}
- code:
- local_path: src/train
- environment: azureml:AzureML-Minimal:1
- compute: azureml:gpu-cluster
+ model_dir: ${{parent.outputs.trained_model}}
+ code: src/train
+ environment: azureml:AzureML-Minimal@latest
command: >- python train.py
- --training-data ${{inputs.clean_data}}
+ --epochs ${{inputs.num_epochs}}
+ --training-data ${{inputs.training_data}}
--model-output ${{outputs.model_dir}}
+ compute: azureml:gpu-cluster
``` ### Parameterizing the `command` with the `inputs` and `outputs` contexts of a component
Similar to the `command` for a job, the `command` for a component can also be parameterized with references to the `inputs` and `outputs` contexts.
```yaml $schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json type: command
-code:
- local_path: ./src
+code: ./src
command: python train.py --lr ${{inputs.learning_rate}} --training-data ${{inputs.iris}} --model-dir ${{outputs.model_dir}}
-environment: azureml:AzureML-Minimal:1
+environment: azureml:AzureML-Minimal@latest
inputs: learning_rate: type: number default: 0.01
- optional: true
iris:
- type: path
+ type: uri_file
outputs: model_dir:
- type: path
+ type: uri_folder
``` ## Next steps
machine-learning Reference Yaml Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-data.md
+
+ Title: 'CLI (v2) data YAML schema'
+
+description: Reference documentation for the CLI (v2) data YAML schema.
++++++++ Last updated : 03/31/2022+++
+# CLI (v2) data YAML schema
++
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/data.schema.json.
+++
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
+| `name` | string | **Required.** Name of the data asset. | | |
+| `version` | string | Version of the data asset. If omitted, Azure ML will autogenerate a version. | | |
+| `description` | string | Description of the data asset. | | |
+| `tags` | object | Dictionary of tags for the data asset. | | |
+| `type` | string | The data asset type. Specify `uri_file` for data that points to a single file source, or `uri_folder` for data that points to a folder source. | `uri_file`, `uri_folder` | `uri_folder` |
+| `path` | string | Either a local path to the data source file or folder, or the URI of a cloud path to the data source file or folder. Ensure that the source provided here is compatible with the `type` specified. <br><br> Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. | | |
+
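As a compact illustration of these keys, a `uri_file` data asset definition might look like the following sketch; the name and version are placeholders, and the URL is the public taxi-tip sample referenced earlier on this page:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
name: taxi-tip-data
version: 1
type: uri_file
path: https://pipelinedata.blob.core.windows.net/sampledata/nytaxi/taxi-tip-data.csv
```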
+## Remarks
+
+The `az ml data` commands can be used for managing Azure Machine Learning data assets.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/assets/data). Several are shown below.
+
+## YAML: datastore file
++
+## YAML: datastore folder
++
+## YAML: https file
++
+## YAML: https folder
++
+## YAML: wasbs file
++
+## YAML: wasbs folder
++
+## YAML: local file
++
+## YAML: local folder
++
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
machine-learning Reference Yaml Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-dataset.md
- Title: 'CLI (v2) dataset YAML schema'-
-description: Reference documentation for the CLI (v2) dataset YAML schema.
-------- Previously updated : 10/21/2021---
-# CLI (v2) dataset YAML schema
--
-The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/dataset.schema.json.
---
-## YAML syntax
-
-| Key | Type | Description | Allowed values |
-| | - | -- | -- |
-| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | |
-| `name` | string | **Required.** Name of the dataset. | |
-| `version` | string | Version of the dataset. If omitted, Azure ML will autogenerate a version. | |
-| `description` | string | Description of the dataset. | |
-| `tags` | object | Dictionary of tags for the dataset. | |
-| `local_path` | string | Absolute or relative path of a single local file or folder from which the dataset is created. **One of `local_path` or `paths` is required.** | |
-| `paths` | array | A list of URI sources from which the dataset is created. Each entry in the list should adhere to the schema defined in [Dataset source path](#dataset-source-path). Currently, only a single source is supported. **One of `local_path` or `paths` is required.** | |
-
-### Dataset source path
-
-| Key | Type | Description |
-| | - | -- |
-| `file` | string | URI to a single file used as a source for the dataset. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. **One of `file` or `folder` is required.** |
-| `folder` | string | URI to a folder used as a source for the dataset. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. **One of `file` or `folder` is required.** |
-
-## Remarks
-
-The `az ml dataset` commands can be used for managing Azure Machine Learning datasets.
-
-## Examples
-
-Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/assets/dataset). Several are shown below.
-
-## YAML: datastore file
--
-## YAML: datastore folder
--
-## YAML: https file
--
-## YAML: https folder
--
-## YAML: wasbs file
--
-## YAML: wasbs folder
--
-## YAML: local file
--
-## YAML: local folder
--
-## Next steps
--- [Install and use the CLI (v2)](how-to-configure-cli.md)
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
Previously updated : 10/21/2021- Last updated : 03/31/2022+ # CLI (v2) batch deployment YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `endpoint_name` | string | **Required.** Name of the endpoint to create the deployment under. | | | | `model` | string or object | **Required.** The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. <br><br> To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. <br><br> To define a model inline, follow the [Model schema](reference-yaml-model.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the model separately and reference it here. | | | | `code_configuration` | object | Configuration for the scoring code logic. <br><br> This property is not required if your model is in MLflow format. | | |
-| `code_configuration.code.local_path` | string | Local path to the source code directory for scoring the model. | | |
+| `code_configuration.code` | string | Local path to the source code directory for scoring the model. | | |
| `code_configuration.scoring_script` | string | Relative path to the scoring file in the source code directory. | | | | `environment` | string or object | The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> This property is not required if your model is in MLflow format. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | | | `compute` | string | **Required.** Name of the compute target to execute the batch scoring jobs on. This value should be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. | | |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `retry_settings.timeout` | integer | The timeout in seconds for scoring a mini batch. | | `30` | | `output_action` | string | Indicates how the output should be organized in the output file. | `append_row`, `summary_only` | `append_row` | | `output_file_name` | string | Name of the batch scoring output file. | | `predictions.csv` |
-| `environment_variables` | object | Dictionary of environment variable name-value pairs to set for each batch scoring job. | | |
+| `environment_variables` | object | Dictionary of environment variable key-value pairs to set for each batch scoring job. | | |
## Remarks
machine-learning Reference Yaml Deployment Kubernetes Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-kubernetes-online.md
+
+ Title: 'CLI (v2) Azure Arc-enabled Kubernetes online deployment YAML schema'
+
+description: Reference documentation for the CLI (v2) Azure Arc-enabled Kubernetes online deployment YAML schema.
+++++++ Last updated : 03/31/2022+++
+# CLI (v2) Azure Arc-enabled Kubernetes online deployment YAML schema
++
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/kubernetesOnlineDeployment.schema.json.
+++
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
+| `name` | string | **Required.** Name of the deployment. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).| | |
+| `description` | string | Description of the deployment. | | |
+| `tags` | object | Dictionary of tags for the deployment. | | |
+| `endpoint_name` | string | **Required.** Name of the endpoint to create the deployment under. | | |
+| `model` | string or object | The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. <br><br> To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. <br><br> To define a model inline, follow the [Model schema](reference-yaml-model.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the model separately and reference it here. <br><br> This field is optional for [custom container deployment](how-to-deploy-custom-container.md) scenarios.| | |
+| `model_mount_path` | string | The path to mount the model in a custom container. Applicable only for [custom container deployment](how-to-deploy-custom-container.md) scenarios. If the `model` field is specified, it is mounted on this path in the container. | | |
+| `code_configuration` | object | Configuration for the scoring code logic. <br><br> This field is optional for [custom container deployment](how-to-deploy-custom-container.md) scenarios. | | |
+| `code_configuration.code` | string | Local path to the source code directory for scoring the model. | | |
+| `code_configuration.scoring_script` | string | Relative path to the scoring file in the source code directory. | | |
+| `environment_variables` | object | Dictionary of environment variable key-value pairs to set in the deployment container. You can access these environment variables from your scoring scripts. | | |
+| `environment` | string or object | **Required.** The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | |
+| `instance_type` | string | The instance type used to place the inference workload. If omitted, the inference workload will be placed on the default instance type of the Kubernetes cluster specified in the endpoint's `compute` field. If specified, the inference workload will be placed on that selected instance type. <br><br> Note that the set of instance types for a Kubernetes cluster is configured via the Kubernetes cluster custom resource definition (CRD), hence they are not part of the Azure ML YAML schema for attaching Kubernetes compute. For more information, see [Create and select Kubernetes instance types](how-to-kubernetes-instance-type.md). | | |
+| `instance_count` | integer | The number of instances to use for the deployment. Specify the value based on the workload you expect. This field is only required if you are using the `default` scale type (`scale_settings.type: default`). <br><br> `instance_count` can be updated after deployment creation using `az ml online-deployment update` command. | | |
+| `app_insights_enabled` | boolean | Whether to enable integration with the Azure Application Insights instance associated with your workspace. | | `false` |
+| `scale_settings` | object | The scale settings for the deployment. The two types of scale settings supported are the `default` scale type and the `target_utilization` scale type. <br><br> With the `default` scale type (`scale_settings.type: default`), you can manually scale the instance count up and down after deployment creation by updating the `instance_count` property. <br><br> To configure the `target_utilization` scale type (`scale_settings.type: target_utilization`), see [TargetUtilizationScaleSettings](#targetutilizationscalesettings) for the set of configurable properties. | | |
+| `scale_settings.type` | string | The scale type. | `default`, `target_utilization` | `target_utilization` |
+| `request_settings` | object | Scoring request settings for the deployment. See [RequestSettings](#requestsettings) for the set of configurable properties. | | |
+| `liveness_probe` | object | Liveness probe settings for monitoring the health of the container regularly. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |
+| `readiness_probe` | object | Readiness probe settings for validating if the container is ready to serve traffic. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |
+| `resources` | object | Container resource requirements. | | |
+| `resources.requests` | object | Resource requests for the container. See [ContainerResourceRequests](#containerresourcerequests) for the set of configurable properties. | | |
+| `resources.limits` | object | Resource limits for the container. See [ContainerResourceLimits](#containerresourcelimits) for the set of configurable properties. | | |
+
+### RequestSettings
+
+| Key | Type | Description | Default value |
+| | - | -- | - |
+| `request_timeout_ms` | integer | The scoring timeout in milliseconds. | `5000` |
+| `max_concurrent_requests_per_instance` | integer | The maximum number of concurrent requests per instance allowed for the deployment. <br><br> **Do not change this setting from the default value unless instructed by Microsoft Technical Support or a member of the Azure ML team.** | `1` |
+| `max_queue_wait_ms` | integer | The maximum amount of time in milliseconds a request will stay in the queue. | `500` |
+
+### ProbeSettings
+
+| Key | Type | Description | Default value |
+| | - | -- | - |
+| `period` | integer | How often (in seconds) to perform the probe. | `10` |
+| `initial_delay` | integer | The number of seconds after the container has started before the probe is initiated. Minimum value is `1`. | `10` |
+| `timeout` | integer | The number of seconds after which the probe times out. Minimum value is `1`. | `2` |
+| `success_threshold` | integer | The minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is `1`. | `1` |
+| `failure_threshold` | integer | When a probe fails, the system will try `failure_threshold` times before giving up. Giving up in the case of a liveness probe means the container will be restarted. In the case of a readiness probe the container will be marked Unready. Minimum value is `1`. | `30` |
+
+### TargetUtilizationScaleSettings
+
+| Key | Type | Description | Default value |
+| | - | -- | - |
+| `type` | const | The scale type | `target_utilization` |
+| `min_instances` | integer | The minimum number of instances to use. | `1` |
+| `max_instances` | integer | The maximum number of instances to scale to. | `1` |
+| `target_utilization_percentage` | integer | The target CPU usage for the autoscaler. | `70` |
+| `polling_interval` | integer | How often the autoscaler should attempt to scale the deployment, in seconds. | `1` |
++
+### ContainerResourceRequests
+
+| Key | Type | Description |
+| | - | -- |
+| `cpu` | string | The number of CPU cores requested for the container. |
+| `memory` | string | The memory size requested for the container. |
+| `nvidia.com/gpu` | string | The number of Nvidia GPU cards requested for the container. |
+
+### ContainerResourceLimits
+
+| Key | Type | Description |
+| | - | -- |
+| `cpu` | string | The limit for the number of CPU cores for the container. |
+| `memory` | string | The limit for the memory size for the container. |
+| `nvidia.com/gpu` | string | The limit for the number of Nvidia GPU cards for the container. |
+
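Combining the common keys above, a Kubernetes online deployment YAML might look like the following sketch; the endpoint, model, and environment names are placeholder assumptions, and the `target_utilization` scale settings and resource requests use the values and formats described in the tables above:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/kubernetesOnlineDeployment.schema.json
name: blue
endpoint_name: my-k8s-endpoint
model: azureml:my-model:1
code_configuration:
  code: ./src
  scoring_script: score.py
environment: azureml:my-env:1
scale_settings:
  type: target_utilization
  min_instances: 1
  max_instances: 3
  target_utilization_percentage: 70
resources:
  requests:
    cpu: "0.5"
    memory: "1Gi"
```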
+## Remarks
+
+The `az ml online-deployment` commands can be used for managing Azure Machine Learning Kubernetes online deployments.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online). Several are shown below.
+
+## YAML: sample deployments
+++
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
-- Previously updated : 10/21/2021-++ Last updated : 03/31/2022+ # CLI (v2) managed online deployment YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `model` | string or object | The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. <br><br> To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. <br><br> To define a model inline, follow the [Model schema](reference-yaml-model.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the model separately and reference it here. <br><br> This field is optional for [custom container deployment](how-to-deploy-custom-container.md) scenarios.| | | | `model_mount_path` | string | The path to mount the model in a custom container. Applicable only for [custom container deployment](how-to-deploy-custom-container.md) scenarios. If the `model` field is specified, it is mounted on this path in the container. | | | | `code_configuration` | object | Configuration for the scoring code logic. <br><br> This field is optional for [custom container deployment](how-to-deploy-custom-container.md) scenarios. | | |
-| `code_configuration.code.local_path` | string | Local path to the source code directory for scoring the model. | | |
+| `code_configuration.code` | string | Local path to the source code directory for scoring the model. | | |
| `code_configuration.scoring_script` | string | Relative path to the scoring file in the source code directory. | | |
+| `environment_variables` | object | Dictionary of environment variable key-value pairs to set in the deployment container. You can access these environment variables from your scoring scripts. | | |
| `environment` | string or object | **Required.** The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | | | `instance_type` | string | **Required.** The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). | | | | `instance_count` | integer | **Required.** The number of instances to use for the deployment. Specify the value based on the workload you expect. For high availability, Microsoft recommends you set it to at least `3`. <br><br> `instance_count` can be updated after deployment creation using `az ml online-deployment update` command. | | | | `app_insights_enabled` | boolean | Whether to enable integration with the Azure Application Insights instance associated with your workspace. | | `false` |
-| `scale_settings` | object | The scale settings for the deployment. Currently only the `default` scale type is supported, so you do not need to specify this property. <br><br> With this `default` scale type, you can either 1) manually scale the instance count up and down after deployment creation by updating the `instance_count` property or 2) create an [autoscaling policy](). | | |
+| `scale_settings` | object | The scale settings for the deployment. Currently only the `default` scale type is supported, so you do not need to specify this property. <br><br> With this `default` scale type, you can either manually scale the instance count up and down after deployment creation by updating the `instance_count` property, or create an [autoscaling policy](how-to-autoscale-endpoints.md). | | |
| `scale_settings.type` | string | The scale type. | `default` | `default` | | `request_settings` | object | Scoring request settings for the deployment. See [RequestSettings](#requestsettings) for the set of configurable properties. | | | | `liveness_probe` | object | Liveness probe settings for monitoring the health of the container regularly. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `initial_delay` | integer | The number of seconds after the container has started before the probe is initiated. Minimum value is `1`. | `10` | | `timeout` | integer | The number of seconds after which the probe times out. Minimum value is `1`. | `2` | | `success_threshold` | integer | The minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is `1`. | `1` |
-| `failure_threshold` | integer | When a probe fails, the system will try `failure_threshold` times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the container will be marked Unready. Minimum value is `1`. | `30` |
+| `failure_threshold` | integer | When a probe fails, the system will try `failure_threshold` times before giving up. Giving up in the case of a liveness probe means the container will be restarted. In the case of a readiness probe the container will be marked Unready. Minimum value is `1`. | `30` |
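For reference, a minimal managed online deployment using these keys might look like the following sketch; all names are placeholders, and `Standard_DS3_v2` is assumed to be one of the sizes in the SKU list linked above:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-model:1
code_configuration:
  code: ./src
  scoring_script: score.py
environment: azureml:my-env:1
instance_type: Standard_DS3_v2
instance_count: 3
```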
## Remarks
machine-learning Reference Yaml Endpoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-online.md
+
+ Title: 'Online endpoints YAML reference'
+
+description: Learn about the YAML files used to deploy models as online endpoints
++++++++ Last updated : 03/31/2022+++
+# CLI (v2) online endpoint YAML schema
++
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json.
+++
+> [!NOTE]
+> A fully specified sample YAML for online endpoints is available for [reference](https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.template.yaml)
+
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
+| `name` | string | **Required.** Name of the endpoint. Needs to be unique at the Azure region level. <br><br> Naming rules are defined under [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).| | |
+| `description` | string | Description of the endpoint. | | |
+| `tags` | object | Dictionary of tags for the endpoint. | | |
+| `auth_mode` | string | The authentication method for the endpoint. Key-based authentication and Azure ML token-based authentication are supported. Key-based authentication doesn't expire but Azure ML token-based authentication does. | `key`, `aml_token` | `key` |
+| `compute` | string | Name of the compute target to run the endpoint deployments on. This field is only applicable for endpoint deployments to Azure Arc-enabled Kubernetes clusters (the compute target specified in this field must have `type: kubernetes`). Do not specify this field if you are doing managed online inference. | | |
+| `identity` | object | The managed identity configuration for accessing Azure resources for endpoint provisioning and inference. | | |
+| `identity.type` | string | The type of managed identity. If the type is `user_assigned`, the `identity.user_assigned_identities` property must also be specified. | `system_assigned`, `user_assigned` | |
+| `identity.user_assigned_identities` | array | List of fully qualified resource IDs of the user-assigned identities. | | |
+| `traffic` | object | Traffic represents the percentage of requests to be served by different deployments. It is represented by a dictionary of key-value pairs, where the key is the deployment name and the value is the percentage of traffic to that deployment. For example, `blue: 90 green: 10` means 90% of requests are sent to the deployment named `blue` and 10% are sent to the deployment named `green`. Total traffic has to either be 0 or sum up to 100. See [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) to see the traffic configuration in action. <br><br> Note: you cannot set this field during online endpoint creation, as the deployments under that endpoint must be created before traffic can be set. You can update the traffic for an online endpoint after the deployments have been created using `az ml online-endpoint update`; for example `az ml online-endpoint update --name <endpoint_name> --traffic "blue=90 green=10"`. | | |
+
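A minimal endpoint definition using these keys might look like the following sketch (the name is a placeholder); `traffic` is intentionally omitted because it can only be set after the deployments under the endpoint exist:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-unique-endpoint
description: An example online endpoint
auth_mode: key
identity:
  type: system_assigned
```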
+## Remarks
+
+The `az ml online-endpoint` commands can be used for managing Azure Machine Learning online endpoints.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online). Several are shown below.
+
+## YAML: basic
++
+## YAML: system-assigned identity
++
+## YAML: user-assigned identity
++
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
+- Learn how to [deploy a model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md)
+- [Troubleshooting managed online endpoints deployment and scoring (preview)](./how-to-troubleshoot-online-endpoints.md)
machine-learning Reference Yaml Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-environment.md
Previously updated : 10/21/2021- Last updated : 03/31/2022+ # CLI (v2) environment YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `image` | string | The Docker image to use for the environment. **One of `image` or `build` is required.** | | | | `conda_file` | string or object | The standard conda YAML configuration file of the dependencies for a conda environment. See https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually. <br> <br> If specified, `image` must be specified as well. Azure ML will build the conda environment on top of the Docker image provided. | | | | `build` | object | The Docker build context configuration to use for the environment. **One of `image` or `build` is required.** | | |
-| `build.local_path` | string | Local path to the directory to use as the build context. | | |
+| `build.path` | string | Local path to the directory to use as the build context. | | |
| `build.dockerfile_path` | string | Relative path to the Dockerfile within the build context. | | `Dockerfile` | | `os_type` | string | The type of operating system. | `linux`, `windows` | `linux` | | `inference_config` | object | Inference container configurations. Only applicable if the environment is used to build a serving container for online deployments. See [Attributes of the `inference_config` key](#attributes-of-the-inference_config-key). | | |
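To illustrate the renamed `build.path` key, an environment built from a local Docker build context might be described as in the following sketch; the environment name and context directory are assumptions:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
name: docker-context-env
version: 1
build:
  path: ./docker-context
  dockerfile_path: Dockerfile
os_type: linux
```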
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md
Previously updated : 10/21/2021- Last updated : 03/31/2022+ # CLI (v2) command job YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `experiment_name` | string | Experiment name to organize the job under. Each job's run record will be organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, Azure ML will default it to the name of the working directory where the job was created. | | | | `description` | string | Description of the job. | | | | `tags` | object | Dictionary of tags for the job. | | |
-| `command` | string | **Required.** The command to execute. | | |
-| `code.local_path` | string | Local path to the source code directory to be uploaded and used for the job. | | |
-| `environment` | string or object | **Required.** The environment to use for the job. This can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment use the `azureml:<environment_name>:<environment_version>` syntax. <br><br> To define an environment inline please follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties as they are not supported for inline environments. | | |
-| `environment_variables` | object | Dictionary of environment variable name-value pairs to set on the process where the command is executed. | | |
+| `command` | string | **Required (if not using `component` field).** The command to execute. | | |
+| `code` | string | Local path to the source code directory to be uploaded and used for the job. | | |
+| `environment` | string or object | **Required (if not using `component` field).** The environment to use for the job. This can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment use the `azureml:<environment_name>:<environment_version>` syntax or `azureml:<environment_name>@latest` (to reference the latest version of an environment). <br><br> To define an environment inline please follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties as they are not supported for inline environments. | | |
+| `environment_variables` | object | Dictionary of environment variable key-value pairs to set on the process where the command is executed. | | |
| `distribution` | object | The distribution configuration for distributed training scenarios. One of [MpiConfiguration](#mpiconfiguration), [PyTorchConfiguration](#pytorchconfiguration), or [TensorFlowConfiguration](#tensorflowconfiguration). | | | | `compute` | string | Name of the compute target to execute the job on. This can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. | | `local` | | `resources.instance_count` | integer | The number of nodes to use for the job. | | `1` |
+| `resources.instance_type` | string | The instance type to use for the job. Applicable for jobs running on Azure Arc-enabled Kubernetes compute (where the compute target specified in the `compute` field is of `type: kubernetes`). If omitted, this will default to the default instance type for the Kubernetes cluster. For more information, see [Create and select Kubernetes instance types](how-to-kubernetes-instance-type.md). | | |
| `limits.timeout` | integer | The maximum time in seconds the job is allowed to run. Once this limit is reached the system will cancel the job. | | | | `inputs` | object | Dictionary of inputs to the job. The key is a name for the input within the context of the job and the value is the input value. <br><br> Inputs can be referenced in the `command` using the `${{ inputs.<input_name> }}` expression. | | |
-| `inputs.<input_name>` | number, integer, boolean, string or object | One of a literal value (of type number, integer, boolean, or string), [JobInputUri](#jobinputuri), or [JobInputDataset](#jobinputdataset). | | |
+| `inputs.<input_name>` | number, integer, boolean, string or object | One of a literal value (of type number, integer, boolean, or string) or an object containing a [job input data specification](#job-inputs). | | |
| `outputs` | object | Dictionary of output configurations of the job. The key is a name for the output within the context of the job and the value is the output configuration. <br><br> Outputs can be referenced in the `command` using the `${{ outputs.<output_name> }}` expression. | |
-| `outputs.<output_name>` | object | You can either specify an optional `mode` or leave the object empty. For each named output specified in the `outputs` dictionary, Azure ML will autogenerate an output location. | |
-| `outputs.<output_name>.mode` | string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode the output directory will be a mounted directory. For upload mode the files written to the output directory will get uploaded at the end of the job. | `rw_mount`, `upload` | `rw_mount` |
+| `outputs.<output_name>` | object | You can leave the object empty, in which case by default the output will be of type `uri_folder` and Azure ML will system-generate an output location for the output. File(s) to the output directory will be written via read-write mount. If you want to specify a different mode for the output, provide an object containing the [job output specification](#job-outputs). | |
### Distribution configurations
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
### Job inputs
-#### JobInputUri
- | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |
-| `file` | string | URI to a single file to use as input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. **One of `file` or `folder` is required.** | | |
-| `folder` | string | URI to a folder to use as input. Supported URI types are `azureml`, `wasbs`, `abfss`, `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. **One of `file` or `folder` is required.** | | |
-| `mode` | string | Mode of how the data should be delivered to the compute target. For read-only mount and read-write mount the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. For download mode the data will be consumed as a downloaded path. | `ro_mount`, `rw_mount`, `download` | `ro_mount` |
+| `type` | string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. | `uri_file`, `uri_folder` | `uri_folder` |
+| `path` | string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, e.g. `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. <br><br> - An existing registered Azure ML data asset to use as the input. To reference a registered data asset use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), e.g. `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
+| `mode` | string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. Azure ML will resolve the input to the mount path. <br><br> For `download` mode the data will be downloaded to the compute target. Azure ML will resolve the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s) rather than mounting or downloading the data itself, you can use the `direct` mode. This will pass in the URL of the storage location as the job input. Note that in this case you are fully responsible for handling credentials to access the storage. | `ro_mount`, `download`, `direct` | `ro_mount` |
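
As an illustration, the following hedged sketch shows the three input modes side by side; the input names, the data asset `cifar10-data`, and the storage account `contosostorage` are hypothetical:

```yaml
inputs:
  # Registered data asset, single file, mounted read-only (the default mode)
  training_data:
    type: uri_file
    path: azureml:cifar10-data@latest
  # Folder in cloud storage, downloaded onto the compute target
  raw_images:
    type: uri_folder
    path: wasbs://datasets@contosostorage.blob.core.windows.net/images/
    mode: download
  # Direct mode: only the storage URL is passed in; the job code handles access itself
  lookup_file:
    type: uri_file
    path: https://contosostorage.blob.core.windows.net/datasets/lookup.csv
    mode: direct
```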
-#### JobInputDataset
+### Job outputs
| Key | Type | Description | Allowed values | Default value |
| --- | ---- | ----------- | -------------- | ------------- |
-| `dataset` | string or object | **Required.** A dataset to use as input. This can be either a reference to an existing versioned dataset in the workspace or an inline dataset specification. <br><br> To reference an existing dataset use the `azureml:<dataset_name>:<dataset_version>` syntax. <br><br> To define a dataset inline please follow the [Dataset schema](reference-yaml-dataset.md#yaml-syntax). Exclude the `name` and `version` properties as they are not supported for inline datasets. | | |
-| `mode` | string | Mode of how the dataset should be delivered to the compute target. For read-only mount the dataset will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as the parent folder. For download mode the dataset will be consumed as a downloaded path. | `ro_mount`, `download` | `ro_mount` |
+| `type` | string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. | `uri_folder` | `uri_folder` |
+| `mode` | string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode (`rw_mount`), the output directory will be a mounted directory. For upload mode (`upload`), the file(s) written to the output directory will get uploaded at the end of the job. | `rw_mount`, `upload` | `rw_mount` |
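
As a sketch, the following contrasts the two output modes; the output names are hypothetical:

```yaml
outputs:
  # Empty object: defaults to uri_folder delivered via read-write mount
  # to a system-generated storage location
  model_output: {}
  # Upload mode: files written to the output directory during the job
  # are uploaded to the destination storage when the job completes
  checkpoint_output:
    type: uri_folder
    mode: upload
```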
## Remarks
machine-learning Reference Yaml Job Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-component.md
- Title: 'CLI (v2) component job YAML schema'
- description: Reference documentation for the CLI (v2) component job YAML schema.
- Previously updated : 10/21/2021
-# CLI (v2) component job YAML schema
-The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/commandJob.schema.json.
-## YAML syntax
-| Key | Type | Description | Allowed values | Default value |
-| | - | -- | -- | - |
-| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
-| `type` | const | The type of job. | `component` | |
-| `component` | object | **Required.** The component to invoke and run in a job. This value can be either a reference to an existing versioned component in the workspace, an inline component specification, or the local path to a separate component YAML specification file. <br><br> To reference an existing component, use the `azureml:<component-name>:<component-version>` syntax. <br><br> To define a component inline or in a separate YAML file, follow the [Command component schema](reference-yaml-component-command.md#yaml-syntax). Exclude the `name` and `version` properties as they are not applicable for inline component specifications. | | |
-| `compute` | string | Name of the compute target to execute the job on. This value should be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. If omitted, Azure ML will use the compute defined in the pipeline job's `compute` property. | | |
-| `inputs` | object | Dictionary of inputs to the job. The key corresponds to the name of one of the component inputs and the value is the runtime input value. <br><br> Inputs c